00:00:00.001 Started by upstream project "autotest-per-patch" build number 126200 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.098 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.099 The recommended git tool is: git 00:00:00.099 using credential 00000000-0000-0000-0000-000000000002 00:00:00.101 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.139 Fetching changes from the remote Git repository 00:00:00.144 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.174 Using shallow fetch with depth 1 00:00:00.174 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.174 > git --version # timeout=10 00:00:00.198 > git --version # 'git version 2.39.2' 00:00:00.198 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.223 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.223 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.003 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.015 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.027 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:07.028 > git config core.sparsecheckout # timeout=10 00:00:07.038 > git read-tree -mu HEAD # timeout=10 00:00:07.055 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:07.079 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:07.080 > git rev-list --no-walk 642aedf8bba2e584685fe6e0b1310032564b5451 # timeout=10 00:00:07.169 [Pipeline] Start of Pipeline 00:00:07.185 [Pipeline] library 00:00:07.187 Loading library shm_lib@master 00:00:07.187 Library shm_lib@master is cached. Copying from home. 00:00:07.207 [Pipeline] node 00:00:07.214 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:07.218 [Pipeline] { 00:00:07.227 [Pipeline] catchError 00:00:07.228 [Pipeline] { 00:00:07.239 [Pipeline] wrap 00:00:07.246 [Pipeline] { 00:00:07.251 [Pipeline] stage 00:00:07.253 [Pipeline] { (Prologue) 00:00:07.412 [Pipeline] sh 00:00:07.699 + logger -p user.info -t JENKINS-CI 00:00:07.718 [Pipeline] echo 00:00:07.720 Node: CYP9 00:00:07.728 [Pipeline] sh 00:00:08.032 [Pipeline] setCustomBuildProperty 00:00:08.047 [Pipeline] echo 00:00:08.049 Cleanup processes 00:00:08.055 [Pipeline] sh 00:00:08.341 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.341 1945009 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.358 [Pipeline] sh 00:00:08.647 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.647 ++ grep -v 'sudo pgrep' 00:00:08.647 ++ awk '{print $1}' 00:00:08.647 + sudo kill -9 00:00:08.647 + true 00:00:08.663 [Pipeline] cleanWs 00:00:08.673 [WS-CLEANUP] Deleting project workspace... 00:00:08.673 [WS-CLEANUP] Deferred wipeout is used... 
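The workspace cleanup traced above reduces to a small pgrep/kill idiom. A minimal sketch reconstructed from the trace (the workspace path is the one used in this run; the pids variable name is illustrative):

    #!/usr/bin/env bash
    # Kill any stale test processes still running out of the job workspace
    # before the new run starts. Failures are tolerated: there may be nothing
    # to kill, which is exactly what happens in this log ("+ true").
    WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest

    # List matching processes, drop the pgrep invocation itself, keep the PIDs.
    pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')

    # kill -9 with an empty PID list exits non-zero; ignore that case.
    sudo kill -9 $pids || true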
00:00:08.680 [WS-CLEANUP] done 00:00:08.684 [Pipeline] setCustomBuildProperty 00:00:08.699 [Pipeline] sh 00:00:08.983 + sudo git config --global --replace-all safe.directory '*' 00:00:09.080 [Pipeline] httpRequest 00:00:09.111 [Pipeline] echo 00:00:09.113 Sorcerer 10.211.164.101 is alive 00:00:09.123 [Pipeline] httpRequest 00:00:09.128 HttpMethod: GET 00:00:09.129 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:09.130 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:09.154 Response Code: HTTP/1.1 200 OK 00:00:09.155 Success: Status code 200 is in the accepted range: 200,404 00:00:09.155 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:20.458 [Pipeline] sh 00:00:20.743 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:20.760 [Pipeline] httpRequest 00:00:20.786 [Pipeline] echo 00:00:20.788 Sorcerer 10.211.164.101 is alive 00:00:20.796 [Pipeline] httpRequest 00:00:20.801 HttpMethod: GET 00:00:20.802 URL: http://10.211.164.101/packages/spdk_97f71d59dfa61a4ce1d7c76989fa6bdcc3a14e84.tar.gz 00:00:20.802 Sending request to url: http://10.211.164.101/packages/spdk_97f71d59dfa61a4ce1d7c76989fa6bdcc3a14e84.tar.gz 00:00:20.824 Response Code: HTTP/1.1 200 OK 00:00:20.825 Success: Status code 200 is in the accepted range: 200,404 00:00:20.825 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_97f71d59dfa61a4ce1d7c76989fa6bdcc3a14e84.tar.gz 00:01:05.230 [Pipeline] sh 00:01:05.512 + tar --no-same-owner -xf spdk_97f71d59dfa61a4ce1d7c76989fa6bdcc3a14e84.tar.gz 00:01:08.095 [Pipeline] sh 00:01:08.379 + git -C spdk log --oneline -n5 00:01:08.379 97f71d59d nvmf: consolidate listener addition in avahi_entry_group_add_listeners 00:01:08.379 719d03c6a sock/uring: only register net impl if supported 00:01:08.379 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:01:08.379 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:01:08.379 6c7c1f57e accel: add sequence outstanding stat 00:01:08.390 [Pipeline] } 00:01:08.403 [Pipeline] // stage 00:01:08.411 [Pipeline] stage 00:01:08.413 [Pipeline] { (Prepare) 00:01:08.429 [Pipeline] writeFile 00:01:08.445 [Pipeline] sh 00:01:08.727 + logger -p user.info -t JENKINS-CI 00:01:08.739 [Pipeline] sh 00:01:09.021 + logger -p user.info -t JENKINS-CI 00:01:09.034 [Pipeline] sh 00:01:09.345 + cat autorun-spdk.conf 00:01:09.345 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:09.345 SPDK_TEST_NVMF=1 00:01:09.345 SPDK_TEST_NVME_CLI=1 00:01:09.345 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:09.345 SPDK_TEST_NVMF_NICS=e810 00:01:09.345 SPDK_TEST_VFIOUSER=1 00:01:09.345 SPDK_RUN_UBSAN=1 00:01:09.345 NET_TYPE=phy 00:01:09.353 RUN_NIGHTLY=0 00:01:09.358 [Pipeline] readFile 00:01:09.386 [Pipeline] withEnv 00:01:09.388 [Pipeline] { 00:01:09.405 [Pipeline] sh 00:01:09.692 + set -ex 00:01:09.692 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:09.692 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:09.692 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:09.692 ++ SPDK_TEST_NVMF=1 00:01:09.692 ++ SPDK_TEST_NVME_CLI=1 00:01:09.692 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:09.692 ++ SPDK_TEST_NVMF_NICS=e810 00:01:09.692 ++ SPDK_TEST_VFIOUSER=1 00:01:09.692 ++ SPDK_RUN_UBSAN=1 00:01:09.692 ++ NET_TYPE=phy 00:01:09.692 ++ RUN_NIGHTLY=0 00:01:09.692 + case $SPDK_TEST_NVMF_NICS in 00:01:09.692 + DRIVERS=ice 
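The driver preparation that follows in the trace boils down to roughly the logic below, given the configuration printed above (SPDK_TEST_NVMF_NICS=e810, tcp transport). This is a hedged reconstruction from the traced commands, not the script itself; the case arms and structure are assumptions:

    # Map the configured NIC family to its kernel driver; E810 parts use ice.
    case "$SPDK_TEST_NVMF_NICS" in
        e810) DRIVERS=ice ;;
    esac
    # (an rdma transport would pull in RDMA drivers as well; this run is tcp)
    if [[ -n "$DRIVERS" ]]; then
        # Unload modules that could claim the test NICs. Modules that are not
        # loaded just produce the harmless "not currently loaded" errors seen below.
        sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
        for D in $DRIVERS; do
            sudo modprobe "$D"
        done
    fi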
00:01:09.692 + [[ tcp == \r\d\m\a ]] 00:01:09.692 + [[ -n ice ]] 00:01:09.692 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:09.692 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:09.692 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:09.692 rmmod: ERROR: Module irdma is not currently loaded 00:01:09.692 rmmod: ERROR: Module i40iw is not currently loaded 00:01:09.692 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:09.692 + true 00:01:09.692 + for D in $DRIVERS 00:01:09.692 + sudo modprobe ice 00:01:09.692 + exit 0 00:01:09.702 [Pipeline] } 00:01:09.723 [Pipeline] // withEnv 00:01:09.728 [Pipeline] } 00:01:09.743 [Pipeline] // stage 00:01:09.752 [Pipeline] catchError 00:01:09.754 [Pipeline] { 00:01:09.769 [Pipeline] timeout 00:01:09.769 Timeout set to expire in 50 min 00:01:09.771 [Pipeline] { 00:01:09.789 [Pipeline] stage 00:01:09.791 [Pipeline] { (Tests) 00:01:09.809 [Pipeline] sh 00:01:10.095 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:10.095 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:10.095 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:10.095 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:10.096 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:10.096 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:10.096 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:10.096 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:10.096 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:10.096 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:10.096 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:10.096 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:10.096 + source /etc/os-release 00:01:10.096 ++ NAME='Fedora Linux' 00:01:10.096 ++ VERSION='38 (Cloud Edition)' 00:01:10.096 ++ ID=fedora 00:01:10.096 ++ VERSION_ID=38 00:01:10.096 ++ VERSION_CODENAME= 00:01:10.096 ++ PLATFORM_ID=platform:f38 00:01:10.096 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:10.096 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:10.096 ++ LOGO=fedora-logo-icon 00:01:10.096 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:10.096 ++ HOME_URL=https://fedoraproject.org/ 00:01:10.096 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:10.096 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:10.096 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:10.096 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:10.096 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:10.096 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:10.096 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:10.096 ++ SUPPORT_END=2024-05-14 00:01:10.096 ++ VARIANT='Cloud Edition' 00:01:10.096 ++ VARIANT_ID=cloud 00:01:10.096 + uname -a 00:01:10.096 Linux spdk-cyp-09 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:10.096 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:13.396 Hugepages 00:01:13.396 node hugesize free / total 00:01:13.396 node0 1048576kB 0 / 0 00:01:13.396 node0 2048kB 0 / 0 00:01:13.396 node1 1048576kB 0 / 0 00:01:13.396 node1 2048kB 0 / 0 00:01:13.396 00:01:13.396 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:13.396 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:01:13.396 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:01:13.396 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma 
- - 00:01:13.396 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:01:13.396 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:01:13.396 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:01:13.396 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:01:13.396 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:01:13.396 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:01:13.396 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:01:13.396 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:01:13.396 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:01:13.396 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:01:13.396 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:01:13.396 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:01:13.396 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:01:13.396 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:01:13.396 + rm -f /tmp/spdk-ld-path 00:01:13.396 + source autorun-spdk.conf 00:01:13.396 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:13.396 ++ SPDK_TEST_NVMF=1 00:01:13.396 ++ SPDK_TEST_NVME_CLI=1 00:01:13.396 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:13.396 ++ SPDK_TEST_NVMF_NICS=e810 00:01:13.396 ++ SPDK_TEST_VFIOUSER=1 00:01:13.396 ++ SPDK_RUN_UBSAN=1 00:01:13.396 ++ NET_TYPE=phy 00:01:13.396 ++ RUN_NIGHTLY=0 00:01:13.396 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:13.396 + [[ -n '' ]] 00:01:13.396 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:13.396 + for M in /var/spdk/build-*-manifest.txt 00:01:13.396 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:13.396 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:13.396 + for M in /var/spdk/build-*-manifest.txt 00:01:13.396 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:13.396 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:13.396 ++ uname 00:01:13.396 + [[ Linux == \L\i\n\u\x ]] 00:01:13.396 + sudo dmesg -T 00:01:13.396 + sudo dmesg --clear 00:01:13.396 + dmesg_pid=1945984 00:01:13.396 + [[ Fedora Linux == FreeBSD ]] 00:01:13.396 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:13.396 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:13.396 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:13.396 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:13.396 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:13.396 + [[ -x /usr/src/fio-static/fio ]] 00:01:13.396 + export FIO_BIN=/usr/src/fio-static/fio 00:01:13.396 + FIO_BIN=/usr/src/fio-static/fio 00:01:13.396 + sudo dmesg -Tw 00:01:13.396 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:13.396 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:13.396 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:13.396 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:13.396 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:13.396 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:13.396 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:13.396 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:13.396 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:13.396 Test configuration: 00:01:13.396 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:13.396 SPDK_TEST_NVMF=1 00:01:13.396 SPDK_TEST_NVME_CLI=1 00:01:13.396 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:13.396 SPDK_TEST_NVMF_NICS=e810 00:01:13.396 SPDK_TEST_VFIOUSER=1 00:01:13.396 SPDK_RUN_UBSAN=1 00:01:13.396 NET_TYPE=phy 00:01:13.396 RUN_NIGHTLY=0 15:52:49 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:13.396 15:52:49 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:13.396 15:52:49 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:13.396 15:52:49 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:13.396 15:52:49 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:13.396 15:52:49 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:13.396 15:52:49 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:13.396 15:52:49 -- paths/export.sh@5 -- $ export PATH 00:01:13.396 15:52:49 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:13.396 15:52:49 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:13.396 15:52:49 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:13.396 15:52:49 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721051569.XXXXXX 00:01:13.396 15:52:49 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721051569.zML65e 00:01:13.396 15:52:49 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:13.396 15:52:49 -- 
common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:13.396 15:52:49 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:13.396 15:52:49 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:13.396 15:52:49 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:13.396 15:52:49 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:13.396 15:52:49 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:13.396 15:52:49 -- common/autotest_common.sh@10 -- $ set +x 00:01:13.396 15:52:49 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:13.396 15:52:49 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:13.396 15:52:49 -- pm/common@17 -- $ local monitor 00:01:13.396 15:52:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:13.397 15:52:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:13.397 15:52:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:13.397 15:52:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:13.397 15:52:49 -- pm/common@21 -- $ date +%s 00:01:13.397 15:52:49 -- pm/common@21 -- $ date +%s 00:01:13.397 15:52:49 -- pm/common@25 -- $ sleep 1 00:01:13.397 15:52:49 -- pm/common@21 -- $ date +%s 00:01:13.397 15:52:49 -- pm/common@21 -- $ date +%s 00:01:13.397 15:52:49 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721051569 00:01:13.397 15:52:49 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721051569 00:01:13.397 15:52:49 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721051569 00:01:13.397 15:52:49 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721051569 00:01:13.397 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721051569_collect-vmstat.pm.log 00:01:13.397 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721051569_collect-cpu-load.pm.log 00:01:13.397 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721051569_collect-cpu-temp.pm.log 00:01:13.397 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721051569_collect-bmc-pm.bmc.pm.log 00:01:14.338 15:52:50 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:14.338 15:52:50 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:14.338 15:52:50 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:14.338 15:52:50 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:14.338 15:52:50 -- spdk/autobuild.sh@16 -- $ date -u 00:01:14.338 Mon Jul 15 01:52:50 PM UTC 2024 00:01:14.338 15:52:50 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:14.338 v24.09-pre-203-g97f71d59d 00:01:14.338 15:52:50 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:14.338 15:52:50 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:14.338 15:52:50 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:14.338 15:52:50 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:14.338 15:52:50 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:14.338 15:52:50 -- common/autotest_common.sh@10 -- $ set +x 00:01:14.338 ************************************ 00:01:14.338 START TEST ubsan 00:01:14.338 ************************************ 00:01:14.338 15:52:50 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:14.338 using ubsan 00:01:14.338 00:01:14.338 real 0m0.000s 00:01:14.338 user 0m0.000s 00:01:14.338 sys 0m0.000s 00:01:14.338 15:52:50 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:14.338 15:52:50 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:14.338 ************************************ 00:01:14.338 END TEST ubsan 00:01:14.338 ************************************ 00:01:14.598 15:52:50 -- common/autotest_common.sh@1142 -- $ return 0 00:01:14.598 15:52:50 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:14.598 15:52:50 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:14.598 15:52:50 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:14.598 15:52:50 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:14.598 15:52:50 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:14.598 15:52:50 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:14.598 15:52:50 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:14.598 15:52:50 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:14.598 15:52:50 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:14.598 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:14.598 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:15.169 Using 'verbs' RDMA provider 00:01:31.039 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:43.273 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:43.273 Creating mk/config.mk...done. 00:01:43.273 Creating mk/cc.flags.mk...done. 00:01:43.273 Type 'make' to build. 
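Stripped of the xtrace noise, the build step that starts here is a plain configure-then-make, with the flags assembled from autorun-spdk.conf earlier in the log. A condensed sketch (paths and job count are the ones used in this run):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Flags come from get_config_params plus --with-shared, as logged above.
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared

    # Parallel build; autobuild.sh wraps this as "run_test make make -j144".
    make -j144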
00:01:43.273 15:53:18 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:01:43.273 15:53:18 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:43.273 15:53:18 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:43.273 15:53:18 -- common/autotest_common.sh@10 -- $ set +x 00:01:43.273 ************************************ 00:01:43.273 START TEST make 00:01:43.273 ************************************ 00:01:43.273 15:53:18 make -- common/autotest_common.sh@1123 -- $ make -j144 00:01:43.273 make[1]: Nothing to be done for 'all'. 00:01:44.217 The Meson build system 00:01:44.217 Version: 1.3.1 00:01:44.217 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:44.217 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:44.217 Build type: native build 00:01:44.217 Project name: libvfio-user 00:01:44.217 Project version: 0.0.1 00:01:44.217 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:44.217 C linker for the host machine: cc ld.bfd 2.39-16 00:01:44.217 Host machine cpu family: x86_64 00:01:44.217 Host machine cpu: x86_64 00:01:44.217 Run-time dependency threads found: YES 00:01:44.217 Library dl found: YES 00:01:44.218 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:44.218 Run-time dependency json-c found: YES 0.17 00:01:44.218 Run-time dependency cmocka found: YES 1.1.7 00:01:44.218 Program pytest-3 found: NO 00:01:44.218 Program flake8 found: NO 00:01:44.218 Program misspell-fixer found: NO 00:01:44.218 Program restructuredtext-lint found: NO 00:01:44.218 Program valgrind found: YES (/usr/bin/valgrind) 00:01:44.218 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:44.218 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:44.218 Compiler for C supports arguments -Wwrite-strings: YES 00:01:44.218 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:44.218 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:44.218 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:44.218 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:44.218 Build targets in project: 8 00:01:44.218 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:44.218 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:44.218 00:01:44.218 libvfio-user 0.0.1 00:01:44.218 00:01:44.218 User defined options 00:01:44.218 buildtype : debug 00:01:44.218 default_library: shared 00:01:44.218 libdir : /usr/local/lib 00:01:44.218 00:01:44.218 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:44.477 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:44.477 [1/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:44.477 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:44.477 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:44.477 [4/37] Compiling C object samples/null.p/null.c.o 00:01:44.738 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:44.738 [6/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:44.738 [7/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:44.738 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:44.738 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:44.738 [10/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:44.738 [11/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:44.738 [12/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:44.738 [13/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:44.738 [14/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:44.738 [15/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:44.738 [16/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:44.738 [17/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:44.738 [18/37] Compiling C object samples/server.p/server.c.o 00:01:44.738 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:44.738 [20/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:44.738 [21/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:44.738 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:44.738 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:44.738 [24/37] Compiling C object samples/client.p/client.c.o 00:01:44.738 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:44.738 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:44.738 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:44.738 [28/37] Linking target samples/client 00:01:44.738 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:01:44.738 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:44.738 [31/37] Linking target test/unit_tests 00:01:44.738 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:44.999 [33/37] Linking target samples/server 00:01:44.999 [34/37] Linking target samples/null 00:01:44.999 [35/37] Linking target samples/gpio-pci-idio-16 00:01:44.999 [36/37] Linking target samples/shadow_ioeventfd_server 00:01:44.999 [37/37] Linking target samples/lspci 00:01:44.999 INFO: autodetecting backend as ninja 00:01:44.999 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
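The libvfio-user sub-build around this point is an ordinary meson/ninja flow: configure into build-debug, compile, then stage the install with DESTDIR. A sketch of the equivalent standalone commands (the meson options mirror the "User defined options" summary above; the exact wrapper SPDK uses may differ):

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BUILD_DIR=$SPDK_DIR/build/libvfio-user/build-debug

    # Configure: debug build of shared libraries, as in the summary above.
    meson setup "$BUILD_DIR" "$SPDK_DIR/libvfio-user" \
        --buildtype debug --default-library shared

    # Compile, then install into a staging root rather than the live system.
    ninja -C "$BUILD_DIR"
    DESTDIR=$SPDK_DIR/build/libvfio-user meson install --quiet -C "$BUILD_DIR"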
00:01:44.999 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:45.261 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:45.261 ninja: no work to do. 00:01:50.553 The Meson build system 00:01:50.553 Version: 1.3.1 00:01:50.553 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:50.553 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:50.553 Build type: native build 00:01:50.553 Program cat found: YES (/usr/bin/cat) 00:01:50.553 Project name: DPDK 00:01:50.553 Project version: 24.03.0 00:01:50.553 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:50.553 C linker for the host machine: cc ld.bfd 2.39-16 00:01:50.553 Host machine cpu family: x86_64 00:01:50.553 Host machine cpu: x86_64 00:01:50.553 Message: ## Building in Developer Mode ## 00:01:50.553 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:50.553 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:50.553 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:50.553 Program python3 found: YES (/usr/bin/python3) 00:01:50.553 Program cat found: YES (/usr/bin/cat) 00:01:50.553 Compiler for C supports arguments -march=native: YES 00:01:50.553 Checking for size of "void *" : 8 00:01:50.553 Checking for size of "void *" : 8 (cached) 00:01:50.553 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:50.553 Library m found: YES 00:01:50.553 Library numa found: YES 00:01:50.553 Has header "numaif.h" : YES 00:01:50.553 Library fdt found: NO 00:01:50.553 Library execinfo found: NO 00:01:50.553 Has header "execinfo.h" : YES 00:01:50.553 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:50.553 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:50.553 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:50.553 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:50.553 Run-time dependency openssl found: YES 3.0.9 00:01:50.553 Run-time dependency libpcap found: YES 1.10.4 00:01:50.553 Has header "pcap.h" with dependency libpcap: YES 00:01:50.553 Compiler for C supports arguments -Wcast-qual: YES 00:01:50.553 Compiler for C supports arguments -Wdeprecated: YES 00:01:50.553 Compiler for C supports arguments -Wformat: YES 00:01:50.553 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:50.553 Compiler for C supports arguments -Wformat-security: NO 00:01:50.553 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:50.553 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:50.553 Compiler for C supports arguments -Wnested-externs: YES 00:01:50.553 Compiler for C supports arguments -Wold-style-definition: YES 00:01:50.553 Compiler for C supports arguments -Wpointer-arith: YES 00:01:50.553 Compiler for C supports arguments -Wsign-compare: YES 00:01:50.553 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:50.553 Compiler for C supports arguments -Wundef: YES 00:01:50.553 Compiler for C supports arguments -Wwrite-strings: YES 00:01:50.553 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:50.553 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:01:50.553 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:50.553 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:50.553 Program objdump found: YES (/usr/bin/objdump) 00:01:50.553 Compiler for C supports arguments -mavx512f: YES 00:01:50.553 Checking if "AVX512 checking" compiles: YES 00:01:50.553 Fetching value of define "__SSE4_2__" : 1 00:01:50.553 Fetching value of define "__AES__" : 1 00:01:50.553 Fetching value of define "__AVX__" : 1 00:01:50.553 Fetching value of define "__AVX2__" : 1 00:01:50.553 Fetching value of define "__AVX512BW__" : 1 00:01:50.553 Fetching value of define "__AVX512CD__" : 1 00:01:50.553 Fetching value of define "__AVX512DQ__" : 1 00:01:50.553 Fetching value of define "__AVX512F__" : 1 00:01:50.553 Fetching value of define "__AVX512VL__" : 1 00:01:50.553 Fetching value of define "__PCLMUL__" : 1 00:01:50.553 Fetching value of define "__RDRND__" : 1 00:01:50.553 Fetching value of define "__RDSEED__" : 1 00:01:50.553 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:50.553 Fetching value of define "__znver1__" : (undefined) 00:01:50.553 Fetching value of define "__znver2__" : (undefined) 00:01:50.553 Fetching value of define "__znver3__" : (undefined) 00:01:50.553 Fetching value of define "__znver4__" : (undefined) 00:01:50.553 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:50.553 Message: lib/log: Defining dependency "log" 00:01:50.553 Message: lib/kvargs: Defining dependency "kvargs" 00:01:50.553 Message: lib/telemetry: Defining dependency "telemetry" 00:01:50.553 Checking for function "getentropy" : NO 00:01:50.553 Message: lib/eal: Defining dependency "eal" 00:01:50.553 Message: lib/ring: Defining dependency "ring" 00:01:50.553 Message: lib/rcu: Defining dependency "rcu" 00:01:50.553 Message: lib/mempool: Defining dependency "mempool" 00:01:50.553 Message: lib/mbuf: Defining dependency "mbuf" 00:01:50.553 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:50.553 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:50.553 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:50.553 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:50.553 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:50.553 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:50.553 Compiler for C supports arguments -mpclmul: YES 00:01:50.553 Compiler for C supports arguments -maes: YES 00:01:50.553 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:50.553 Compiler for C supports arguments -mavx512bw: YES 00:01:50.553 Compiler for C supports arguments -mavx512dq: YES 00:01:50.553 Compiler for C supports arguments -mavx512vl: YES 00:01:50.553 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:50.553 Compiler for C supports arguments -mavx2: YES 00:01:50.553 Compiler for C supports arguments -mavx: YES 00:01:50.553 Message: lib/net: Defining dependency "net" 00:01:50.553 Message: lib/meter: Defining dependency "meter" 00:01:50.553 Message: lib/ethdev: Defining dependency "ethdev" 00:01:50.553 Message: lib/pci: Defining dependency "pci" 00:01:50.553 Message: lib/cmdline: Defining dependency "cmdline" 00:01:50.553 Message: lib/hash: Defining dependency "hash" 00:01:50.553 Message: lib/timer: Defining dependency "timer" 00:01:50.553 Message: lib/compressdev: Defining dependency "compressdev" 00:01:50.553 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:50.553 Message: lib/dmadev: Defining dependency "dmadev" 00:01:50.553 Compiler for C 
supports arguments -Wno-cast-qual: YES 00:01:50.553 Message: lib/power: Defining dependency "power" 00:01:50.553 Message: lib/reorder: Defining dependency "reorder" 00:01:50.554 Message: lib/security: Defining dependency "security" 00:01:50.554 Has header "linux/userfaultfd.h" : YES 00:01:50.554 Has header "linux/vduse.h" : YES 00:01:50.554 Message: lib/vhost: Defining dependency "vhost" 00:01:50.554 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:50.554 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:50.554 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:50.554 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:50.554 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:50.554 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:50.554 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:50.554 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:50.554 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:50.554 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:50.554 Program doxygen found: YES (/usr/bin/doxygen) 00:01:50.554 Configuring doxy-api-html.conf using configuration 00:01:50.554 Configuring doxy-api-man.conf using configuration 00:01:50.554 Program mandb found: YES (/usr/bin/mandb) 00:01:50.554 Program sphinx-build found: NO 00:01:50.554 Configuring rte_build_config.h using configuration 00:01:50.554 Message: 00:01:50.554 ================= 00:01:50.554 Applications Enabled 00:01:50.554 ================= 00:01:50.554 00:01:50.554 apps: 00:01:50.554 00:01:50.554 00:01:50.554 Message: 00:01:50.554 ================= 00:01:50.554 Libraries Enabled 00:01:50.554 ================= 00:01:50.554 00:01:50.554 libs: 00:01:50.554 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:50.554 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:50.554 cryptodev, dmadev, power, reorder, security, vhost, 00:01:50.554 00:01:50.554 Message: 00:01:50.554 =============== 00:01:50.554 Drivers Enabled 00:01:50.554 =============== 00:01:50.554 00:01:50.554 common: 00:01:50.554 00:01:50.554 bus: 00:01:50.554 pci, vdev, 00:01:50.554 mempool: 00:01:50.554 ring, 00:01:50.554 dma: 00:01:50.554 00:01:50.554 net: 00:01:50.554 00:01:50.554 crypto: 00:01:50.554 00:01:50.554 compress: 00:01:50.554 00:01:50.554 vdpa: 00:01:50.554 00:01:50.554 00:01:50.554 Message: 00:01:50.554 ================= 00:01:50.554 Content Skipped 00:01:50.554 ================= 00:01:50.554 00:01:50.554 apps: 00:01:50.554 dumpcap: explicitly disabled via build config 00:01:50.554 graph: explicitly disabled via build config 00:01:50.554 pdump: explicitly disabled via build config 00:01:50.554 proc-info: explicitly disabled via build config 00:01:50.554 test-acl: explicitly disabled via build config 00:01:50.554 test-bbdev: explicitly disabled via build config 00:01:50.554 test-cmdline: explicitly disabled via build config 00:01:50.554 test-compress-perf: explicitly disabled via build config 00:01:50.554 test-crypto-perf: explicitly disabled via build config 00:01:50.554 test-dma-perf: explicitly disabled via build config 00:01:50.554 test-eventdev: explicitly disabled via build config 00:01:50.554 test-fib: explicitly disabled via build config 00:01:50.554 test-flow-perf: explicitly disabled via build config 00:01:50.554 test-gpudev: explicitly disabled via build config 00:01:50.554 
test-mldev: explicitly disabled via build config 00:01:50.554 test-pipeline: explicitly disabled via build config 00:01:50.554 test-pmd: explicitly disabled via build config 00:01:50.554 test-regex: explicitly disabled via build config 00:01:50.554 test-sad: explicitly disabled via build config 00:01:50.554 test-security-perf: explicitly disabled via build config 00:01:50.554 00:01:50.554 libs: 00:01:50.554 argparse: explicitly disabled via build config 00:01:50.554 metrics: explicitly disabled via build config 00:01:50.554 acl: explicitly disabled via build config 00:01:50.554 bbdev: explicitly disabled via build config 00:01:50.554 bitratestats: explicitly disabled via build config 00:01:50.554 bpf: explicitly disabled via build config 00:01:50.554 cfgfile: explicitly disabled via build config 00:01:50.554 distributor: explicitly disabled via build config 00:01:50.554 efd: explicitly disabled via build config 00:01:50.554 eventdev: explicitly disabled via build config 00:01:50.554 dispatcher: explicitly disabled via build config 00:01:50.554 gpudev: explicitly disabled via build config 00:01:50.554 gro: explicitly disabled via build config 00:01:50.554 gso: explicitly disabled via build config 00:01:50.554 ip_frag: explicitly disabled via build config 00:01:50.554 jobstats: explicitly disabled via build config 00:01:50.554 latencystats: explicitly disabled via build config 00:01:50.554 lpm: explicitly disabled via build config 00:01:50.554 member: explicitly disabled via build config 00:01:50.554 pcapng: explicitly disabled via build config 00:01:50.554 rawdev: explicitly disabled via build config 00:01:50.554 regexdev: explicitly disabled via build config 00:01:50.554 mldev: explicitly disabled via build config 00:01:50.554 rib: explicitly disabled via build config 00:01:50.554 sched: explicitly disabled via build config 00:01:50.554 stack: explicitly disabled via build config 00:01:50.554 ipsec: explicitly disabled via build config 00:01:50.554 pdcp: explicitly disabled via build config 00:01:50.554 fib: explicitly disabled via build config 00:01:50.554 port: explicitly disabled via build config 00:01:50.554 pdump: explicitly disabled via build config 00:01:50.554 table: explicitly disabled via build config 00:01:50.554 pipeline: explicitly disabled via build config 00:01:50.554 graph: explicitly disabled via build config 00:01:50.554 node: explicitly disabled via build config 00:01:50.554 00:01:50.554 drivers: 00:01:50.554 common/cpt: not in enabled drivers build config 00:01:50.554 common/dpaax: not in enabled drivers build config 00:01:50.554 common/iavf: not in enabled drivers build config 00:01:50.554 common/idpf: not in enabled drivers build config 00:01:50.554 common/ionic: not in enabled drivers build config 00:01:50.554 common/mvep: not in enabled drivers build config 00:01:50.554 common/octeontx: not in enabled drivers build config 00:01:50.554 bus/auxiliary: not in enabled drivers build config 00:01:50.554 bus/cdx: not in enabled drivers build config 00:01:50.554 bus/dpaa: not in enabled drivers build config 00:01:50.554 bus/fslmc: not in enabled drivers build config 00:01:50.554 bus/ifpga: not in enabled drivers build config 00:01:50.554 bus/platform: not in enabled drivers build config 00:01:50.554 bus/uacce: not in enabled drivers build config 00:01:50.554 bus/vmbus: not in enabled drivers build config 00:01:50.554 common/cnxk: not in enabled drivers build config 00:01:50.554 common/mlx5: not in enabled drivers build config 00:01:50.554 common/nfp: not in enabled drivers 
build config 00:01:50.554 common/nitrox: not in enabled drivers build config 00:01:50.554 common/qat: not in enabled drivers build config 00:01:50.554 common/sfc_efx: not in enabled drivers build config 00:01:50.554 mempool/bucket: not in enabled drivers build config 00:01:50.554 mempool/cnxk: not in enabled drivers build config 00:01:50.554 mempool/dpaa: not in enabled drivers build config 00:01:50.554 mempool/dpaa2: not in enabled drivers build config 00:01:50.554 mempool/octeontx: not in enabled drivers build config 00:01:50.554 mempool/stack: not in enabled drivers build config 00:01:50.554 dma/cnxk: not in enabled drivers build config 00:01:50.554 dma/dpaa: not in enabled drivers build config 00:01:50.554 dma/dpaa2: not in enabled drivers build config 00:01:50.554 dma/hisilicon: not in enabled drivers build config 00:01:50.554 dma/idxd: not in enabled drivers build config 00:01:50.554 dma/ioat: not in enabled drivers build config 00:01:50.554 dma/skeleton: not in enabled drivers build config 00:01:50.554 net/af_packet: not in enabled drivers build config 00:01:50.554 net/af_xdp: not in enabled drivers build config 00:01:50.554 net/ark: not in enabled drivers build config 00:01:50.554 net/atlantic: not in enabled drivers build config 00:01:50.554 net/avp: not in enabled drivers build config 00:01:50.554 net/axgbe: not in enabled drivers build config 00:01:50.554 net/bnx2x: not in enabled drivers build config 00:01:50.554 net/bnxt: not in enabled drivers build config 00:01:50.554 net/bonding: not in enabled drivers build config 00:01:50.554 net/cnxk: not in enabled drivers build config 00:01:50.554 net/cpfl: not in enabled drivers build config 00:01:50.554 net/cxgbe: not in enabled drivers build config 00:01:50.554 net/dpaa: not in enabled drivers build config 00:01:50.554 net/dpaa2: not in enabled drivers build config 00:01:50.554 net/e1000: not in enabled drivers build config 00:01:50.554 net/ena: not in enabled drivers build config 00:01:50.554 net/enetc: not in enabled drivers build config 00:01:50.554 net/enetfec: not in enabled drivers build config 00:01:50.554 net/enic: not in enabled drivers build config 00:01:50.554 net/failsafe: not in enabled drivers build config 00:01:50.554 net/fm10k: not in enabled drivers build config 00:01:50.554 net/gve: not in enabled drivers build config 00:01:50.554 net/hinic: not in enabled drivers build config 00:01:50.554 net/hns3: not in enabled drivers build config 00:01:50.554 net/i40e: not in enabled drivers build config 00:01:50.554 net/iavf: not in enabled drivers build config 00:01:50.554 net/ice: not in enabled drivers build config 00:01:50.554 net/idpf: not in enabled drivers build config 00:01:50.554 net/igc: not in enabled drivers build config 00:01:50.554 net/ionic: not in enabled drivers build config 00:01:50.554 net/ipn3ke: not in enabled drivers build config 00:01:50.554 net/ixgbe: not in enabled drivers build config 00:01:50.554 net/mana: not in enabled drivers build config 00:01:50.554 net/memif: not in enabled drivers build config 00:01:50.554 net/mlx4: not in enabled drivers build config 00:01:50.554 net/mlx5: not in enabled drivers build config 00:01:50.554 net/mvneta: not in enabled drivers build config 00:01:50.554 net/mvpp2: not in enabled drivers build config 00:01:50.554 net/netvsc: not in enabled drivers build config 00:01:50.554 net/nfb: not in enabled drivers build config 00:01:50.554 net/nfp: not in enabled drivers build config 00:01:50.554 net/ngbe: not in enabled drivers build config 00:01:50.554 net/null: not in 
enabled drivers build config 00:01:50.554 net/octeontx: not in enabled drivers build config 00:01:50.554 net/octeon_ep: not in enabled drivers build config 00:01:50.554 net/pcap: not in enabled drivers build config 00:01:50.554 net/pfe: not in enabled drivers build config 00:01:50.554 net/qede: not in enabled drivers build config 00:01:50.554 net/ring: not in enabled drivers build config 00:01:50.554 net/sfc: not in enabled drivers build config 00:01:50.554 net/softnic: not in enabled drivers build config 00:01:50.554 net/tap: not in enabled drivers build config 00:01:50.554 net/thunderx: not in enabled drivers build config 00:01:50.554 net/txgbe: not in enabled drivers build config 00:01:50.554 net/vdev_netvsc: not in enabled drivers build config 00:01:50.554 net/vhost: not in enabled drivers build config 00:01:50.554 net/virtio: not in enabled drivers build config 00:01:50.554 net/vmxnet3: not in enabled drivers build config 00:01:50.554 raw/*: missing internal dependency, "rawdev" 00:01:50.554 crypto/armv8: not in enabled drivers build config 00:01:50.554 crypto/bcmfs: not in enabled drivers build config 00:01:50.554 crypto/caam_jr: not in enabled drivers build config 00:01:50.554 crypto/ccp: not in enabled drivers build config 00:01:50.554 crypto/cnxk: not in enabled drivers build config 00:01:50.554 crypto/dpaa_sec: not in enabled drivers build config 00:01:50.554 crypto/dpaa2_sec: not in enabled drivers build config 00:01:50.554 crypto/ipsec_mb: not in enabled drivers build config 00:01:50.554 crypto/mlx5: not in enabled drivers build config 00:01:50.554 crypto/mvsam: not in enabled drivers build config 00:01:50.554 crypto/nitrox: not in enabled drivers build config 00:01:50.554 crypto/null: not in enabled drivers build config 00:01:50.554 crypto/octeontx: not in enabled drivers build config 00:01:50.554 crypto/openssl: not in enabled drivers build config 00:01:50.554 crypto/scheduler: not in enabled drivers build config 00:01:50.554 crypto/uadk: not in enabled drivers build config 00:01:50.554 crypto/virtio: not in enabled drivers build config 00:01:50.554 compress/isal: not in enabled drivers build config 00:01:50.554 compress/mlx5: not in enabled drivers build config 00:01:50.554 compress/nitrox: not in enabled drivers build config 00:01:50.554 compress/octeontx: not in enabled drivers build config 00:01:50.554 compress/zlib: not in enabled drivers build config 00:01:50.554 regex/*: missing internal dependency, "regexdev" 00:01:50.554 ml/*: missing internal dependency, "mldev" 00:01:50.554 vdpa/ifc: not in enabled drivers build config 00:01:50.554 vdpa/mlx5: not in enabled drivers build config 00:01:50.554 vdpa/nfp: not in enabled drivers build config 00:01:50.554 vdpa/sfc: not in enabled drivers build config 00:01:50.555 event/*: missing internal dependency, "eventdev" 00:01:50.555 baseband/*: missing internal dependency, "bbdev" 00:01:50.555 gpu/*: missing internal dependency, "gpudev" 00:01:50.555 00:01:50.555 00:01:50.815 Build targets in project: 84 00:01:50.815 00:01:50.815 DPDK 24.03.0 00:01:50.815 00:01:50.815 User defined options 00:01:50.815 buildtype : debug 00:01:50.815 default_library : shared 00:01:50.815 libdir : lib 00:01:50.816 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:50.816 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:50.816 c_link_args : 00:01:50.816 cpu_instruction_set: native 00:01:50.816 disable_apps : 
test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:50.816 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:50.816 enable_docs : false 00:01:50.816 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:50.816 enable_kmods : false 00:01:50.816 max_lcores : 128 00:01:50.816 tests : false 00:01:50.816 00:01:50.816 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:51.075 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:51.342 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:51.342 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:51.342 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:51.342 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:51.342 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:51.342 [6/267] Linking static target lib/librte_kvargs.a 00:01:51.342 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:51.342 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:51.342 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:51.342 [10/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:51.342 [11/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:51.342 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:51.342 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:51.342 [14/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:51.600 [15/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:51.600 [16/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:51.600 [17/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:51.601 [18/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:51.601 [19/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:51.601 [20/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:51.601 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:51.601 [22/267] Linking static target lib/librte_log.a 00:01:51.601 [23/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:51.601 [24/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:51.601 [25/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:51.601 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:51.601 [27/267] Linking static target lib/librte_pci.a 00:01:51.601 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:51.601 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:51.601 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:51.601 [31/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:51.601 [32/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:51.601 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:51.601 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:51.601 [35/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:51.860 [36/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:51.860 [37/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:51.860 [38/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:51.860 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:51.860 [40/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:51.860 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:51.860 [42/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:51.860 [43/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.860 [44/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:51.860 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:51.860 [46/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:51.860 [47/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.860 [48/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:51.860 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:51.860 [50/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:51.860 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:51.860 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:51.860 [53/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:51.860 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:51.860 [55/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:51.860 [56/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:51.860 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:51.860 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:51.860 [59/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:51.860 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:51.860 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:52.123 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:52.123 [63/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:52.123 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:52.123 [65/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:52.123 [66/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:52.123 [67/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:52.123 [68/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:52.123 [69/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:52.123 [70/267] Compiling C 
object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:52.123 [71/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:52.123 [72/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:52.123 [73/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:52.123 [74/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:52.123 [75/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:52.123 [76/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:52.123 [77/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:52.123 [78/267] Linking static target lib/librte_telemetry.a 00:01:52.123 [79/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:52.123 [80/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:52.123 [81/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:52.123 [82/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:52.123 [83/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:52.123 [84/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:52.123 [85/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:52.123 [86/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:52.123 [87/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:52.123 [88/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:52.123 [89/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:52.123 [90/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:52.123 [91/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:52.123 [92/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:52.123 [93/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:52.123 [94/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:52.123 [95/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:52.123 [96/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:52.123 [97/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:52.123 [98/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:52.123 [99/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:52.123 [100/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:52.123 [101/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:52.123 [102/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:52.123 [103/267] Linking static target lib/librte_meter.a 00:01:52.123 [104/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:52.123 [105/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:52.123 [106/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:52.123 [107/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:52.123 [108/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:52.123 [109/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:52.123 [110/267] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:52.123 [111/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:52.123 [112/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:52.123 [113/267] Linking static target lib/librte_reorder.a 00:01:52.123 [114/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:52.123 [115/267] Linking static target lib/librte_ring.a 00:01:52.123 [116/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:52.123 [117/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:52.123 [118/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:52.123 [119/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:52.123 [120/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:52.123 [121/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:52.123 [122/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:52.123 [123/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:52.123 [124/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:52.123 [125/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:52.123 [126/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:52.123 [127/267] Linking static target lib/librte_cmdline.a 00:01:52.123 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:52.123 [129/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:52.123 [130/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:52.123 [131/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:52.123 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:52.123 [133/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:52.123 [134/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:52.123 [135/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:52.123 [136/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.123 [137/267] Linking static target lib/librte_timer.a 00:01:52.123 [138/267] Linking static target lib/librte_compressdev.a 00:01:52.123 [139/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:52.123 [140/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:52.123 [141/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:52.124 [142/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:52.124 [143/267] Linking static target lib/librte_dmadev.a 00:01:52.124 [144/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:52.124 [145/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:52.124 [146/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:52.124 [147/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:52.124 [148/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:52.124 [149/267] Linking target lib/librte_log.so.24.1 00:01:52.124 [150/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:52.124 [151/267] Linking static target 
lib/librte_net.a 00:01:52.124 [152/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:52.124 [153/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:52.124 [154/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:52.124 [155/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:52.124 [156/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:52.124 [157/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:52.124 [158/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:52.124 [159/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:52.124 [160/267] Linking static target lib/librte_power.a 00:01:52.124 [161/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:52.124 [162/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:52.124 [163/267] Linking static target lib/librte_rcu.a 00:01:52.124 [164/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:52.124 [165/267] Linking static target lib/librte_eal.a 00:01:52.124 [166/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:52.124 [167/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:52.124 [168/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:52.124 [169/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:52.124 [170/267] Linking static target lib/librte_mempool.a 00:01:52.124 [171/267] Linking static target lib/librte_mbuf.a 00:01:52.124 [172/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:52.386 [173/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:52.386 [174/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:52.386 [175/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:52.386 [176/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:52.386 [177/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:52.386 [178/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:52.386 [179/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:52.386 [180/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:52.386 [181/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:52.386 [182/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:52.386 [183/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:52.386 [184/267] Linking static target lib/librte_security.a 00:01:52.386 [185/267] Linking target lib/librte_kvargs.so.24.1 00:01:52.386 [186/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.386 [187/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:52.386 [188/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:52.386 [189/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:52.386 [190/267] Linking static target drivers/librte_bus_vdev.a 00:01:52.386 [191/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:52.386 [192/267] Linking static target lib/librte_hash.a 00:01:52.386 [193/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 
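(Editorial note, hedged: the DPDK configuration summary near the top of this build shows most libraries disabled, docs/kmods/tests off, max_lcores capped at 128, and only the bus, bus/pci, bus/vdev and mempool/ring drivers enabled. As a sketch only — not the exact invocation recorded in this log, and assuming those summary fields map one-to-one onto DPDK's standard meson -D options — a configure step of roughly this shape yields that kind of summary:)

    # Sketch, assumptions as above; the disable_libs value is abbreviated here
    # and would carry the full list printed in the summary.
    meson setup build-tmp \
      -Denable_docs=false -Denable_kmods=false -Dtests=false \
      -Dmax_lcores=128 \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
      -Ddisable_libs=bbdev,gpudev,pipeline,...
    ninja -C build-tmp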
00:01:52.386 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:52.386 [195/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:52.386 [196/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:52.386 [197/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.386 [198/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:52.386 [199/267] Linking static target drivers/librte_bus_pci.a 00:01:52.386 [200/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:52.648 [201/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.648 [202/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.648 [203/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:52.648 [204/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:52.648 [205/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:52.648 [206/267] Linking static target drivers/librte_mempool_ring.a 00:01:52.648 [207/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.648 [208/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.648 [209/267] Linking target lib/librte_telemetry.so.24.1 00:01:52.648 [210/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:52.648 [211/267] Linking static target lib/librte_cryptodev.a 00:01:52.648 [212/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.910 [213/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:52.910 [214/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.910 [215/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.910 [216/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.910 [217/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:52.910 [218/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:52.910 [219/267] Linking static target lib/librte_ethdev.a 00:01:53.171 [220/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.171 [221/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.171 [222/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.171 [223/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.432 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.432 [225/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.432 [226/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.029 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:54.029 [228/267] Linking static target lib/librte_vhost.a 00:01:54.974 [229/267] Generating lib/cryptodev.sym_chk 
with a custom command (wrapped by meson to capture output) 00:01:55.918 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.546 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.933 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.933 [233/267] Linking target lib/librte_eal.so.24.1 00:02:03.933 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:04.194 [235/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:04.194 [236/267] Linking target lib/librte_ring.so.24.1 00:02:04.194 [237/267] Linking target lib/librte_meter.so.24.1 00:02:04.194 [238/267] Linking target lib/librte_pci.so.24.1 00:02:04.194 [239/267] Linking target lib/librte_timer.so.24.1 00:02:04.194 [240/267] Linking target lib/librte_dmadev.so.24.1 00:02:04.194 [241/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:04.194 [242/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:04.194 [243/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:04.194 [244/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:04.194 [245/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:04.194 [246/267] Linking target lib/librte_mempool.so.24.1 00:02:04.194 [247/267] Linking target lib/librte_rcu.so.24.1 00:02:04.194 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:04.454 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:04.455 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:04.455 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:04.455 [252/267] Linking target lib/librte_mbuf.so.24.1 00:02:04.715 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:04.715 [254/267] Linking target lib/librte_compressdev.so.24.1 00:02:04.715 [255/267] Linking target lib/librte_reorder.so.24.1 00:02:04.715 [256/267] Linking target lib/librte_net.so.24.1 00:02:04.715 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:02:04.715 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:04.715 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:04.975 [260/267] Linking target lib/librte_cmdline.so.24.1 00:02:04.975 [261/267] Linking target lib/librte_hash.so.24.1 00:02:04.975 [262/267] Linking target lib/librte_security.so.24.1 00:02:04.975 [263/267] Linking target lib/librte_ethdev.so.24.1 00:02:04.975 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:04.975 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:04.975 [266/267] Linking target lib/librte_power.so.24.1 00:02:04.975 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:05.236 INFO: autodetecting backend as ninja 00:02:05.236 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:06.179 CC lib/ut/ut.o 00:02:06.179 CC lib/ut_mock/mock.o 00:02:06.179 CC lib/log/log.o 00:02:06.179 CC lib/log/log_flags.o 00:02:06.179 CC lib/log/log_deprecated.o 00:02:06.440 LIB libspdk_ut.a 00:02:06.440 LIB 
libspdk_log.a 00:02:06.440 LIB libspdk_ut_mock.a 00:02:06.440 SO libspdk_ut.so.2.0 00:02:06.440 SO libspdk_ut_mock.so.6.0 00:02:06.440 SO libspdk_log.so.7.0 00:02:06.440 SYMLINK libspdk_ut.so 00:02:06.440 SYMLINK libspdk_ut_mock.so 00:02:06.440 SYMLINK libspdk_log.so 00:02:07.012 CC lib/util/base64.o 00:02:07.012 CC lib/dma/dma.o 00:02:07.012 CC lib/util/bit_array.o 00:02:07.012 CC lib/util/cpuset.o 00:02:07.012 CXX lib/trace_parser/trace.o 00:02:07.012 CC lib/util/crc16.o 00:02:07.012 CC lib/util/crc32.o 00:02:07.012 CC lib/ioat/ioat.o 00:02:07.012 CC lib/util/crc32c.o 00:02:07.012 CC lib/util/crc32_ieee.o 00:02:07.012 CC lib/util/fd.o 00:02:07.012 CC lib/util/crc64.o 00:02:07.012 CC lib/util/dif.o 00:02:07.012 CC lib/util/file.o 00:02:07.012 CC lib/util/hexlify.o 00:02:07.012 CC lib/util/iov.o 00:02:07.012 CC lib/util/math.o 00:02:07.012 CC lib/util/pipe.o 00:02:07.012 CC lib/util/strerror_tls.o 00:02:07.012 CC lib/util/string.o 00:02:07.012 CC lib/util/uuid.o 00:02:07.012 CC lib/util/fd_group.o 00:02:07.012 CC lib/util/xor.o 00:02:07.012 CC lib/util/zipf.o 00:02:07.012 CC lib/vfio_user/host/vfio_user_pci.o 00:02:07.012 CC lib/vfio_user/host/vfio_user.o 00:02:07.012 LIB libspdk_dma.a 00:02:07.012 SO libspdk_dma.so.4.0 00:02:07.012 LIB libspdk_ioat.a 00:02:07.012 SYMLINK libspdk_dma.so 00:02:07.273 SO libspdk_ioat.so.7.0 00:02:07.273 SYMLINK libspdk_ioat.so 00:02:07.273 LIB libspdk_vfio_user.a 00:02:07.273 SO libspdk_vfio_user.so.5.0 00:02:07.273 LIB libspdk_util.a 00:02:07.273 SYMLINK libspdk_vfio_user.so 00:02:07.273 SO libspdk_util.so.9.1 00:02:07.534 SYMLINK libspdk_util.so 00:02:07.534 LIB libspdk_trace_parser.a 00:02:07.794 SO libspdk_trace_parser.so.5.0 00:02:07.794 SYMLINK libspdk_trace_parser.so 00:02:07.794 CC lib/rdma_utils/rdma_utils.o 00:02:07.794 CC lib/idxd/idxd.o 00:02:07.794 CC lib/idxd/idxd_kernel.o 00:02:07.794 CC lib/idxd/idxd_user.o 00:02:07.794 CC lib/rdma_provider/common.o 00:02:07.794 CC lib/conf/conf.o 00:02:07.794 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:07.794 CC lib/json/json_parse.o 00:02:07.794 CC lib/env_dpdk/env.o 00:02:07.794 CC lib/json/json_util.o 00:02:07.794 CC lib/env_dpdk/memory.o 00:02:07.794 CC lib/vmd/vmd.o 00:02:07.794 CC lib/json/json_write.o 00:02:07.794 CC lib/env_dpdk/pci.o 00:02:07.794 CC lib/vmd/led.o 00:02:07.794 CC lib/env_dpdk/init.o 00:02:07.794 CC lib/env_dpdk/threads.o 00:02:07.794 CC lib/env_dpdk/pci_ioat.o 00:02:07.794 CC lib/env_dpdk/pci_virtio.o 00:02:07.794 CC lib/env_dpdk/pci_vmd.o 00:02:07.794 CC lib/env_dpdk/pci_idxd.o 00:02:07.794 CC lib/env_dpdk/pci_event.o 00:02:07.794 CC lib/env_dpdk/sigbus_handler.o 00:02:07.794 CC lib/env_dpdk/pci_dpdk.o 00:02:07.794 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:07.794 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:08.054 LIB libspdk_rdma_provider.a 00:02:08.054 LIB libspdk_conf.a 00:02:08.054 SO libspdk_rdma_provider.so.6.0 00:02:08.054 SO libspdk_conf.so.6.0 00:02:08.054 LIB libspdk_rdma_utils.a 00:02:08.315 LIB libspdk_json.a 00:02:08.315 SO libspdk_rdma_utils.so.1.0 00:02:08.315 SYMLINK libspdk_rdma_provider.so 00:02:08.315 SO libspdk_json.so.6.0 00:02:08.315 SYMLINK libspdk_conf.so 00:02:08.315 SYMLINK libspdk_rdma_utils.so 00:02:08.315 SYMLINK libspdk_json.so 00:02:08.315 LIB libspdk_idxd.a 00:02:08.315 SO libspdk_idxd.so.12.0 00:02:08.575 LIB libspdk_vmd.a 00:02:08.575 SYMLINK libspdk_idxd.so 00:02:08.575 SO libspdk_vmd.so.6.0 00:02:08.575 SYMLINK libspdk_vmd.so 00:02:08.575 CC lib/jsonrpc/jsonrpc_server.o 00:02:08.575 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:08.575 CC 
lib/jsonrpc/jsonrpc_client.o 00:02:08.575 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:08.837 LIB libspdk_jsonrpc.a 00:02:08.837 SO libspdk_jsonrpc.so.6.0 00:02:09.099 SYMLINK libspdk_jsonrpc.so 00:02:09.099 LIB libspdk_env_dpdk.a 00:02:09.099 SO libspdk_env_dpdk.so.14.1 00:02:09.359 SYMLINK libspdk_env_dpdk.so 00:02:09.359 CC lib/rpc/rpc.o 00:02:09.621 LIB libspdk_rpc.a 00:02:09.621 SO libspdk_rpc.so.6.0 00:02:09.621 SYMLINK libspdk_rpc.so 00:02:10.193 CC lib/trace/trace.o 00:02:10.193 CC lib/trace/trace_flags.o 00:02:10.193 CC lib/trace/trace_rpc.o 00:02:10.193 CC lib/notify/notify.o 00:02:10.193 CC lib/keyring/keyring.o 00:02:10.193 CC lib/notify/notify_rpc.o 00:02:10.193 CC lib/keyring/keyring_rpc.o 00:02:10.193 LIB libspdk_notify.a 00:02:10.193 LIB libspdk_keyring.a 00:02:10.193 SO libspdk_notify.so.6.0 00:02:10.193 LIB libspdk_trace.a 00:02:10.193 SO libspdk_keyring.so.1.0 00:02:10.193 SYMLINK libspdk_notify.so 00:02:10.193 SO libspdk_trace.so.10.0 00:02:10.455 SYMLINK libspdk_keyring.so 00:02:10.455 SYMLINK libspdk_trace.so 00:02:10.716 CC lib/thread/thread.o 00:02:10.716 CC lib/sock/sock.o 00:02:10.716 CC lib/thread/iobuf.o 00:02:10.716 CC lib/sock/sock_rpc.o 00:02:11.287 LIB libspdk_sock.a 00:02:11.287 SO libspdk_sock.so.10.0 00:02:11.287 SYMLINK libspdk_sock.so 00:02:11.549 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:11.549 CC lib/nvme/nvme_ctrlr.o 00:02:11.549 CC lib/nvme/nvme_fabric.o 00:02:11.549 CC lib/nvme/nvme_ns_cmd.o 00:02:11.549 CC lib/nvme/nvme_ns.o 00:02:11.549 CC lib/nvme/nvme_pcie_common.o 00:02:11.549 CC lib/nvme/nvme_pcie.o 00:02:11.549 CC lib/nvme/nvme_qpair.o 00:02:11.549 CC lib/nvme/nvme.o 00:02:11.549 CC lib/nvme/nvme_quirks.o 00:02:11.549 CC lib/nvme/nvme_transport.o 00:02:11.549 CC lib/nvme/nvme_discovery.o 00:02:11.549 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:11.549 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:11.549 CC lib/nvme/nvme_tcp.o 00:02:11.549 CC lib/nvme/nvme_opal.o 00:02:11.549 CC lib/nvme/nvme_io_msg.o 00:02:11.549 CC lib/nvme/nvme_poll_group.o 00:02:11.549 CC lib/nvme/nvme_zns.o 00:02:11.549 CC lib/nvme/nvme_stubs.o 00:02:11.549 CC lib/nvme/nvme_vfio_user.o 00:02:11.549 CC lib/nvme/nvme_cuse.o 00:02:11.549 CC lib/nvme/nvme_auth.o 00:02:11.549 CC lib/nvme/nvme_rdma.o 00:02:12.122 LIB libspdk_thread.a 00:02:12.122 SO libspdk_thread.so.10.1 00:02:12.122 SYMLINK libspdk_thread.so 00:02:12.384 CC lib/blob/blobstore.o 00:02:12.384 CC lib/blob/request.o 00:02:12.384 CC lib/blob/zeroes.o 00:02:12.384 CC lib/blob/blob_bs_dev.o 00:02:12.384 CC lib/virtio/virtio.o 00:02:12.384 CC lib/virtio/virtio_vfio_user.o 00:02:12.384 CC lib/virtio/virtio_vhost_user.o 00:02:12.384 CC lib/virtio/virtio_pci.o 00:02:12.384 CC lib/vfu_tgt/tgt_endpoint.o 00:02:12.384 CC lib/vfu_tgt/tgt_rpc.o 00:02:12.384 CC lib/accel/accel.o 00:02:12.384 CC lib/accel/accel_rpc.o 00:02:12.384 CC lib/accel/accel_sw.o 00:02:12.645 CC lib/init/json_config.o 00:02:12.645 CC lib/init/subsystem.o 00:02:12.645 CC lib/init/subsystem_rpc.o 00:02:12.645 CC lib/init/rpc.o 00:02:12.645 LIB libspdk_init.a 00:02:12.907 SO libspdk_init.so.5.0 00:02:12.907 LIB libspdk_virtio.a 00:02:12.907 LIB libspdk_vfu_tgt.a 00:02:12.907 SO libspdk_virtio.so.7.0 00:02:12.907 SO libspdk_vfu_tgt.so.3.0 00:02:12.907 SYMLINK libspdk_init.so 00:02:12.907 SYMLINK libspdk_virtio.so 00:02:12.907 SYMLINK libspdk_vfu_tgt.so 00:02:13.168 CC lib/event/app.o 00:02:13.168 CC lib/event/reactor.o 00:02:13.168 CC lib/event/log_rpc.o 00:02:13.168 CC lib/event/app_rpc.o 00:02:13.168 CC lib/event/scheduler_static.o 00:02:13.429 LIB libspdk_accel.a 
00:02:13.429 SO libspdk_accel.so.15.1 00:02:13.429 LIB libspdk_nvme.a 00:02:13.429 SYMLINK libspdk_accel.so 00:02:13.693 SO libspdk_nvme.so.13.1 00:02:13.693 LIB libspdk_event.a 00:02:13.693 SO libspdk_event.so.14.0 00:02:13.693 SYMLINK libspdk_event.so 00:02:14.021 CC lib/bdev/bdev.o 00:02:14.021 CC lib/bdev/bdev_rpc.o 00:02:14.021 CC lib/bdev/bdev_zone.o 00:02:14.021 CC lib/bdev/part.o 00:02:14.021 CC lib/bdev/scsi_nvme.o 00:02:14.021 SYMLINK libspdk_nvme.so 00:02:14.964 LIB libspdk_blob.a 00:02:15.225 SO libspdk_blob.so.11.0 00:02:15.225 SYMLINK libspdk_blob.so 00:02:15.486 CC lib/lvol/lvol.o 00:02:15.486 CC lib/blobfs/blobfs.o 00:02:15.486 CC lib/blobfs/tree.o 00:02:16.059 LIB libspdk_bdev.a 00:02:16.059 SO libspdk_bdev.so.15.1 00:02:16.321 SYMLINK libspdk_bdev.so 00:02:16.321 LIB libspdk_blobfs.a 00:02:16.321 SO libspdk_blobfs.so.10.0 00:02:16.321 LIB libspdk_lvol.a 00:02:16.321 SYMLINK libspdk_blobfs.so 00:02:16.583 SO libspdk_lvol.so.10.0 00:02:16.583 SYMLINK libspdk_lvol.so 00:02:16.583 CC lib/nbd/nbd.o 00:02:16.583 CC lib/nbd/nbd_rpc.o 00:02:16.583 CC lib/nvmf/ctrlr.o 00:02:16.583 CC lib/nvmf/ctrlr_discovery.o 00:02:16.583 CC lib/nvmf/ctrlr_bdev.o 00:02:16.583 CC lib/nvmf/subsystem.o 00:02:16.583 CC lib/nvmf/nvmf.o 00:02:16.583 CC lib/ublk/ublk.o 00:02:16.583 CC lib/nvmf/nvmf_rpc.o 00:02:16.583 CC lib/scsi/dev.o 00:02:16.583 CC lib/ublk/ublk_rpc.o 00:02:16.583 CC lib/nvmf/transport.o 00:02:16.583 CC lib/scsi/lun.o 00:02:16.583 CC lib/nvmf/tcp.o 00:02:16.583 CC lib/scsi/port.o 00:02:16.583 CC lib/nvmf/stubs.o 00:02:16.583 CC lib/nvmf/mdns_server.o 00:02:16.583 CC lib/scsi/scsi.o 00:02:16.583 CC lib/nvmf/vfio_user.o 00:02:16.583 CC lib/ftl/ftl_core.o 00:02:16.583 CC lib/scsi/scsi_bdev.o 00:02:16.583 CC lib/nvmf/rdma.o 00:02:16.583 CC lib/scsi/scsi_pr.o 00:02:16.583 CC lib/ftl/ftl_init.o 00:02:16.583 CC lib/scsi/scsi_rpc.o 00:02:16.583 CC lib/nvmf/auth.o 00:02:16.583 CC lib/ftl/ftl_layout.o 00:02:16.583 CC lib/scsi/task.o 00:02:16.583 CC lib/ftl/ftl_debug.o 00:02:16.583 CC lib/ftl/ftl_io.o 00:02:16.583 CC lib/ftl/ftl_sb.o 00:02:16.583 CC lib/ftl/ftl_l2p.o 00:02:16.583 CC lib/ftl/ftl_l2p_flat.o 00:02:16.583 CC lib/ftl/ftl_nv_cache.o 00:02:16.583 CC lib/ftl/ftl_band.o 00:02:16.583 CC lib/ftl/ftl_band_ops.o 00:02:16.583 CC lib/ftl/ftl_writer.o 00:02:16.583 CC lib/ftl/ftl_rq.o 00:02:16.583 CC lib/ftl/ftl_l2p_cache.o 00:02:16.583 CC lib/ftl/ftl_reloc.o 00:02:16.583 CC lib/ftl/ftl_p2l.o 00:02:16.583 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:16.583 CC lib/ftl/mngt/ftl_mngt.o 00:02:16.583 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:16.583 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:16.583 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:16.583 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:16.583 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:16.583 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:16.583 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:16.583 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:16.583 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:16.583 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:16.583 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:16.583 CC lib/ftl/utils/ftl_conf.o 00:02:16.583 CC lib/ftl/utils/ftl_md.o 00:02:16.583 CC lib/ftl/utils/ftl_property.o 00:02:16.583 CC lib/ftl/utils/ftl_bitmap.o 00:02:16.583 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:16.583 CC lib/ftl/utils/ftl_mempool.o 00:02:16.583 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:16.583 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:16.583 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:16.583 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:16.583 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:16.583 
CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:16.583 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:16.583 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:16.583 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:16.583 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:16.583 CC lib/ftl/base/ftl_base_dev.o 00:02:16.583 CC lib/ftl/base/ftl_base_bdev.o 00:02:16.583 CC lib/ftl/ftl_trace.o 00:02:17.153 LIB libspdk_nbd.a 00:02:17.153 SO libspdk_nbd.so.7.0 00:02:17.153 LIB libspdk_scsi.a 00:02:17.153 SYMLINK libspdk_nbd.so 00:02:17.414 SO libspdk_scsi.so.9.0 00:02:17.414 LIB libspdk_ublk.a 00:02:17.414 SO libspdk_ublk.so.3.0 00:02:17.414 SYMLINK libspdk_scsi.so 00:02:17.414 SYMLINK libspdk_ublk.so 00:02:17.673 LIB libspdk_ftl.a 00:02:17.673 CC lib/vhost/vhost.o 00:02:17.673 CC lib/vhost/vhost_rpc.o 00:02:17.673 CC lib/vhost/vhost_scsi.o 00:02:17.673 CC lib/vhost/vhost_blk.o 00:02:17.673 CC lib/vhost/rte_vhost_user.o 00:02:17.673 CC lib/iscsi/conn.o 00:02:17.673 CC lib/iscsi/init_grp.o 00:02:17.673 CC lib/iscsi/iscsi.o 00:02:17.673 CC lib/iscsi/portal_grp.o 00:02:17.673 CC lib/iscsi/md5.o 00:02:17.673 CC lib/iscsi/param.o 00:02:17.673 CC lib/iscsi/tgt_node.o 00:02:17.673 CC lib/iscsi/iscsi_subsystem.o 00:02:17.673 CC lib/iscsi/iscsi_rpc.o 00:02:17.673 CC lib/iscsi/task.o 00:02:17.673 SO libspdk_ftl.so.9.0 00:02:18.241 SYMLINK libspdk_ftl.so 00:02:18.502 LIB libspdk_nvmf.a 00:02:18.502 SO libspdk_nvmf.so.18.1 00:02:18.762 LIB libspdk_vhost.a 00:02:18.762 SO libspdk_vhost.so.8.0 00:02:18.762 SYMLINK libspdk_nvmf.so 00:02:18.763 SYMLINK libspdk_vhost.so 00:02:18.763 LIB libspdk_iscsi.a 00:02:19.024 SO libspdk_iscsi.so.8.0 00:02:19.024 SYMLINK libspdk_iscsi.so 00:02:19.598 CC module/env_dpdk/env_dpdk_rpc.o 00:02:19.598 CC module/vfu_device/vfu_virtio.o 00:02:19.598 CC module/vfu_device/vfu_virtio_blk.o 00:02:19.598 CC module/vfu_device/vfu_virtio_rpc.o 00:02:19.598 CC module/vfu_device/vfu_virtio_scsi.o 00:02:19.860 LIB libspdk_env_dpdk_rpc.a 00:02:19.860 CC module/accel/iaa/accel_iaa.o 00:02:19.860 CC module/accel/iaa/accel_iaa_rpc.o 00:02:19.860 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:19.860 CC module/blob/bdev/blob_bdev.o 00:02:19.860 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:19.860 CC module/keyring/file/keyring.o 00:02:19.860 CC module/accel/ioat/accel_ioat.o 00:02:19.860 CC module/keyring/file/keyring_rpc.o 00:02:19.860 CC module/accel/error/accel_error.o 00:02:19.860 CC module/accel/ioat/accel_ioat_rpc.o 00:02:19.860 CC module/accel/dsa/accel_dsa.o 00:02:19.860 CC module/keyring/linux/keyring_rpc.o 00:02:19.860 CC module/keyring/linux/keyring.o 00:02:19.860 CC module/accel/error/accel_error_rpc.o 00:02:19.860 CC module/scheduler/gscheduler/gscheduler.o 00:02:19.860 CC module/accel/dsa/accel_dsa_rpc.o 00:02:19.860 CC module/sock/posix/posix.o 00:02:19.860 SO libspdk_env_dpdk_rpc.so.6.0 00:02:19.860 SYMLINK libspdk_env_dpdk_rpc.so 00:02:20.122 LIB libspdk_scheduler_dpdk_governor.a 00:02:20.122 LIB libspdk_keyring_file.a 00:02:20.122 LIB libspdk_keyring_linux.a 00:02:20.122 LIB libspdk_scheduler_gscheduler.a 00:02:20.122 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:20.122 LIB libspdk_accel_error.a 00:02:20.122 LIB libspdk_scheduler_dynamic.a 00:02:20.122 LIB libspdk_accel_iaa.a 00:02:20.122 SO libspdk_keyring_file.so.1.0 00:02:20.122 SO libspdk_keyring_linux.so.1.0 00:02:20.122 LIB libspdk_accel_ioat.a 00:02:20.122 SO libspdk_scheduler_gscheduler.so.4.0 00:02:20.122 SO libspdk_accel_error.so.2.0 00:02:20.122 SO libspdk_scheduler_dynamic.so.4.0 00:02:20.122 SO libspdk_accel_iaa.so.3.0 00:02:20.122 SYMLINK 
libspdk_scheduler_dpdk_governor.so 00:02:20.122 LIB libspdk_accel_dsa.a 00:02:20.122 SO libspdk_accel_ioat.so.6.0 00:02:20.122 LIB libspdk_blob_bdev.a 00:02:20.122 SYMLINK libspdk_scheduler_gscheduler.so 00:02:20.122 SYMLINK libspdk_keyring_file.so 00:02:20.122 SYMLINK libspdk_keyring_linux.so 00:02:20.122 SYMLINK libspdk_accel_error.so 00:02:20.122 SO libspdk_blob_bdev.so.11.0 00:02:20.122 SO libspdk_accel_dsa.so.5.0 00:02:20.122 SYMLINK libspdk_scheduler_dynamic.so 00:02:20.122 SYMLINK libspdk_accel_iaa.so 00:02:20.122 SYMLINK libspdk_accel_ioat.so 00:02:20.122 SYMLINK libspdk_blob_bdev.so 00:02:20.122 LIB libspdk_vfu_device.a 00:02:20.122 SYMLINK libspdk_accel_dsa.so 00:02:20.384 SO libspdk_vfu_device.so.3.0 00:02:20.384 SYMLINK libspdk_vfu_device.so 00:02:20.647 LIB libspdk_sock_posix.a 00:02:20.647 SO libspdk_sock_posix.so.6.0 00:02:20.647 SYMLINK libspdk_sock_posix.so 00:02:20.647 CC module/bdev/delay/vbdev_delay.o 00:02:20.647 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:20.647 CC module/bdev/nvme/bdev_nvme.o 00:02:20.647 CC module/bdev/aio/bdev_aio.o 00:02:20.647 CC module/bdev/gpt/gpt.o 00:02:20.647 CC module/bdev/lvol/vbdev_lvol.o 00:02:20.647 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:20.647 CC module/bdev/aio/bdev_aio_rpc.o 00:02:20.647 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:20.647 CC module/bdev/nvme/nvme_rpc.o 00:02:20.647 CC module/bdev/gpt/vbdev_gpt.o 00:02:20.647 CC module/bdev/error/vbdev_error.o 00:02:20.647 CC module/bdev/nvme/bdev_mdns_client.o 00:02:20.907 CC module/bdev/error/vbdev_error_rpc.o 00:02:20.907 CC module/bdev/nvme/vbdev_opal.o 00:02:20.907 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:20.907 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:20.907 CC module/bdev/raid/bdev_raid.o 00:02:20.907 CC module/bdev/raid/bdev_raid_rpc.o 00:02:20.907 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:20.907 CC module/bdev/split/vbdev_split.o 00:02:20.907 CC module/bdev/raid/bdev_raid_sb.o 00:02:20.907 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:20.907 CC module/bdev/ftl/bdev_ftl.o 00:02:20.907 CC module/blobfs/bdev/blobfs_bdev.o 00:02:20.907 CC module/bdev/null/bdev_null.o 00:02:20.907 CC module/bdev/split/vbdev_split_rpc.o 00:02:20.907 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:20.907 CC module/bdev/raid/raid0.o 00:02:20.907 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:20.907 CC module/bdev/null/bdev_null_rpc.o 00:02:20.907 CC module/bdev/passthru/vbdev_passthru.o 00:02:20.907 CC module/bdev/malloc/bdev_malloc.o 00:02:20.907 CC module/bdev/raid/raid1.o 00:02:20.907 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:20.907 CC module/bdev/raid/concat.o 00:02:20.907 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:20.907 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:20.907 CC module/bdev/iscsi/bdev_iscsi.o 00:02:20.907 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:20.907 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:20.907 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:21.168 LIB libspdk_blobfs_bdev.a 00:02:21.168 SO libspdk_blobfs_bdev.so.6.0 00:02:21.168 LIB libspdk_bdev_error.a 00:02:21.168 LIB libspdk_bdev_split.a 00:02:21.168 LIB libspdk_bdev_null.a 00:02:21.168 SO libspdk_bdev_error.so.6.0 00:02:21.168 SO libspdk_bdev_split.so.6.0 00:02:21.168 LIB libspdk_bdev_gpt.a 00:02:21.168 LIB libspdk_bdev_ftl.a 00:02:21.168 SO libspdk_bdev_null.so.6.0 00:02:21.168 LIB libspdk_bdev_aio.a 00:02:21.168 SYMLINK libspdk_blobfs_bdev.so 00:02:21.168 LIB libspdk_bdev_delay.a 00:02:21.168 SO libspdk_bdev_gpt.so.6.0 00:02:21.168 LIB libspdk_bdev_passthru.a 00:02:21.168 LIB 
libspdk_bdev_zone_block.a 00:02:21.168 SO libspdk_bdev_aio.so.6.0 00:02:21.168 SO libspdk_bdev_ftl.so.6.0 00:02:21.168 SYMLINK libspdk_bdev_error.so 00:02:21.168 SYMLINK libspdk_bdev_null.so 00:02:21.168 LIB libspdk_bdev_iscsi.a 00:02:21.168 SYMLINK libspdk_bdev_split.so 00:02:21.168 SO libspdk_bdev_delay.so.6.0 00:02:21.168 SO libspdk_bdev_passthru.so.6.0 00:02:21.168 SO libspdk_bdev_zone_block.so.6.0 00:02:21.168 SO libspdk_bdev_iscsi.so.6.0 00:02:21.168 LIB libspdk_bdev_malloc.a 00:02:21.168 SYMLINK libspdk_bdev_gpt.so 00:02:21.168 SYMLINK libspdk_bdev_aio.so 00:02:21.168 SYMLINK libspdk_bdev_ftl.so 00:02:21.168 SO libspdk_bdev_malloc.so.6.0 00:02:21.168 SYMLINK libspdk_bdev_delay.so 00:02:21.168 SYMLINK libspdk_bdev_passthru.so 00:02:21.168 SYMLINK libspdk_bdev_zone_block.so 00:02:21.168 SYMLINK libspdk_bdev_iscsi.so 00:02:21.168 LIB libspdk_bdev_lvol.a 00:02:21.429 LIB libspdk_bdev_virtio.a 00:02:21.429 SO libspdk_bdev_lvol.so.6.0 00:02:21.429 SYMLINK libspdk_bdev_malloc.so 00:02:21.429 SO libspdk_bdev_virtio.so.6.0 00:02:21.429 SYMLINK libspdk_bdev_lvol.so 00:02:21.429 SYMLINK libspdk_bdev_virtio.so 00:02:21.690 LIB libspdk_bdev_raid.a 00:02:21.690 SO libspdk_bdev_raid.so.6.0 00:02:21.950 SYMLINK libspdk_bdev_raid.so 00:02:22.894 LIB libspdk_bdev_nvme.a 00:02:22.894 SO libspdk_bdev_nvme.so.7.0 00:02:22.894 SYMLINK libspdk_bdev_nvme.so 00:02:23.466 CC module/event/subsystems/iobuf/iobuf.o 00:02:23.466 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:23.466 CC module/event/subsystems/keyring/keyring.o 00:02:23.466 CC module/event/subsystems/vmd/vmd.o 00:02:23.466 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:23.467 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:23.467 CC module/event/subsystems/scheduler/scheduler.o 00:02:23.467 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:23.467 CC module/event/subsystems/sock/sock.o 00:02:23.727 LIB libspdk_event_keyring.a 00:02:23.727 LIB libspdk_event_vmd.a 00:02:23.727 LIB libspdk_event_vhost_blk.a 00:02:23.727 LIB libspdk_event_scheduler.a 00:02:23.727 LIB libspdk_event_iobuf.a 00:02:23.727 LIB libspdk_event_vfu_tgt.a 00:02:23.727 LIB libspdk_event_sock.a 00:02:23.727 SO libspdk_event_keyring.so.1.0 00:02:23.727 SO libspdk_event_vmd.so.6.0 00:02:23.727 SO libspdk_event_vhost_blk.so.3.0 00:02:23.727 SO libspdk_event_scheduler.so.4.0 00:02:23.728 SO libspdk_event_vfu_tgt.so.3.0 00:02:23.728 SO libspdk_event_iobuf.so.3.0 00:02:23.728 SO libspdk_event_sock.so.5.0 00:02:23.728 SYMLINK libspdk_event_keyring.so 00:02:23.728 SYMLINK libspdk_event_vfu_tgt.so 00:02:23.728 SYMLINK libspdk_event_vmd.so 00:02:23.728 SYMLINK libspdk_event_vhost_blk.so 00:02:23.728 SYMLINK libspdk_event_scheduler.so 00:02:23.728 SYMLINK libspdk_event_sock.so 00:02:23.728 SYMLINK libspdk_event_iobuf.so 00:02:24.300 CC module/event/subsystems/accel/accel.o 00:02:24.300 LIB libspdk_event_accel.a 00:02:24.300 SO libspdk_event_accel.so.6.0 00:02:24.560 SYMLINK libspdk_event_accel.so 00:02:24.821 CC module/event/subsystems/bdev/bdev.o 00:02:24.821 LIB libspdk_event_bdev.a 00:02:25.083 SO libspdk_event_bdev.so.6.0 00:02:25.083 SYMLINK libspdk_event_bdev.so 00:02:25.344 CC module/event/subsystems/scsi/scsi.o 00:02:25.344 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:25.344 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:25.344 CC module/event/subsystems/ublk/ublk.o 00:02:25.344 CC module/event/subsystems/nbd/nbd.o 00:02:25.606 LIB libspdk_event_ublk.a 00:02:25.606 LIB libspdk_event_scsi.a 00:02:25.606 LIB libspdk_event_nbd.a 00:02:25.606 SO 
libspdk_event_ublk.so.3.0 00:02:25.606 SO libspdk_event_scsi.so.6.0 00:02:25.606 SO libspdk_event_nbd.so.6.0 00:02:25.606 LIB libspdk_event_nvmf.a 00:02:25.606 SYMLINK libspdk_event_ublk.so 00:02:25.606 SYMLINK libspdk_event_scsi.so 00:02:25.606 SO libspdk_event_nvmf.so.6.0 00:02:25.606 SYMLINK libspdk_event_nbd.so 00:02:25.867 SYMLINK libspdk_event_nvmf.so 00:02:26.129 CC module/event/subsystems/iscsi/iscsi.o 00:02:26.129 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:26.129 LIB libspdk_event_vhost_scsi.a 00:02:26.129 LIB libspdk_event_iscsi.a 00:02:26.129 SO libspdk_event_vhost_scsi.so.3.0 00:02:26.392 SO libspdk_event_iscsi.so.6.0 00:02:26.392 SYMLINK libspdk_event_vhost_scsi.so 00:02:26.392 SYMLINK libspdk_event_iscsi.so 00:02:26.655 SO libspdk.so.6.0 00:02:26.655 SYMLINK libspdk.so 00:02:26.916 CXX app/trace/trace.o 00:02:26.916 CC app/trace_record/trace_record.o 00:02:26.916 CC app/spdk_nvme_discover/discovery_aer.o 00:02:26.916 TEST_HEADER include/spdk/accel.h 00:02:26.916 CC test/rpc_client/rpc_client_test.o 00:02:26.916 CC app/spdk_top/spdk_top.o 00:02:26.916 CC app/spdk_nvme_perf/perf.o 00:02:26.916 CC app/spdk_lspci/spdk_lspci.o 00:02:26.916 TEST_HEADER include/spdk/accel_module.h 00:02:26.916 TEST_HEADER include/spdk/assert.h 00:02:26.916 TEST_HEADER include/spdk/barrier.h 00:02:26.916 TEST_HEADER include/spdk/base64.h 00:02:26.916 TEST_HEADER include/spdk/bdev.h 00:02:26.916 CC app/spdk_nvme_identify/identify.o 00:02:26.917 TEST_HEADER include/spdk/bdev_module.h 00:02:26.917 TEST_HEADER include/spdk/bdev_zone.h 00:02:26.917 TEST_HEADER include/spdk/bit_array.h 00:02:26.917 TEST_HEADER include/spdk/bit_pool.h 00:02:26.917 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:26.917 TEST_HEADER include/spdk/blob_bdev.h 00:02:26.917 TEST_HEADER include/spdk/blobfs.h 00:02:26.917 TEST_HEADER include/spdk/blob.h 00:02:26.917 TEST_HEADER include/spdk/conf.h 00:02:26.917 TEST_HEADER include/spdk/config.h 00:02:26.917 TEST_HEADER include/spdk/cpuset.h 00:02:26.917 TEST_HEADER include/spdk/crc16.h 00:02:26.917 TEST_HEADER include/spdk/crc32.h 00:02:26.917 TEST_HEADER include/spdk/crc64.h 00:02:26.917 TEST_HEADER include/spdk/dif.h 00:02:26.917 TEST_HEADER include/spdk/dma.h 00:02:26.917 TEST_HEADER include/spdk/endian.h 00:02:26.917 TEST_HEADER include/spdk/env_dpdk.h 00:02:26.917 TEST_HEADER include/spdk/env.h 00:02:26.917 TEST_HEADER include/spdk/fd_group.h 00:02:26.917 CC app/iscsi_tgt/iscsi_tgt.o 00:02:26.917 TEST_HEADER include/spdk/event.h 00:02:26.917 TEST_HEADER include/spdk/file.h 00:02:26.917 TEST_HEADER include/spdk/fd.h 00:02:26.917 CC app/nvmf_tgt/nvmf_main.o 00:02:26.917 TEST_HEADER include/spdk/ftl.h 00:02:26.917 TEST_HEADER include/spdk/hexlify.h 00:02:26.917 TEST_HEADER include/spdk/gpt_spec.h 00:02:26.917 TEST_HEADER include/spdk/histogram_data.h 00:02:26.917 TEST_HEADER include/spdk/idxd.h 00:02:26.917 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:26.917 TEST_HEADER include/spdk/idxd_spec.h 00:02:26.917 TEST_HEADER include/spdk/init.h 00:02:26.917 CC app/spdk_dd/spdk_dd.o 00:02:26.917 TEST_HEADER include/spdk/ioat.h 00:02:26.917 TEST_HEADER include/spdk/ioat_spec.h 00:02:26.917 TEST_HEADER include/spdk/json.h 00:02:26.917 TEST_HEADER include/spdk/iscsi_spec.h 00:02:26.917 TEST_HEADER include/spdk/jsonrpc.h 00:02:26.917 CC app/spdk_tgt/spdk_tgt.o 00:02:26.917 TEST_HEADER include/spdk/keyring.h 00:02:26.917 TEST_HEADER include/spdk/keyring_module.h 00:02:26.917 TEST_HEADER include/spdk/likely.h 00:02:26.917 TEST_HEADER include/spdk/log.h 00:02:26.917 TEST_HEADER 
include/spdk/lvol.h 00:02:26.917 TEST_HEADER include/spdk/memory.h 00:02:26.917 TEST_HEADER include/spdk/mmio.h 00:02:26.917 TEST_HEADER include/spdk/nbd.h 00:02:26.917 TEST_HEADER include/spdk/notify.h 00:02:26.917 TEST_HEADER include/spdk/nvme.h 00:02:26.917 TEST_HEADER include/spdk/nvme_intel.h 00:02:26.917 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:26.917 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:26.917 TEST_HEADER include/spdk/nvme_spec.h 00:02:26.917 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:26.917 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:26.917 TEST_HEADER include/spdk/nvme_zns.h 00:02:26.917 TEST_HEADER include/spdk/nvmf_spec.h 00:02:26.917 TEST_HEADER include/spdk/nvmf.h 00:02:26.917 TEST_HEADER include/spdk/nvmf_transport.h 00:02:26.917 TEST_HEADER include/spdk/opal_spec.h 00:02:26.917 TEST_HEADER include/spdk/opal.h 00:02:26.917 TEST_HEADER include/spdk/pci_ids.h 00:02:26.917 TEST_HEADER include/spdk/pipe.h 00:02:26.917 TEST_HEADER include/spdk/reduce.h 00:02:26.917 TEST_HEADER include/spdk/queue.h 00:02:26.917 TEST_HEADER include/spdk/rpc.h 00:02:26.917 TEST_HEADER include/spdk/scheduler.h 00:02:26.917 TEST_HEADER include/spdk/scsi.h 00:02:26.917 TEST_HEADER include/spdk/sock.h 00:02:26.917 TEST_HEADER include/spdk/scsi_spec.h 00:02:26.917 TEST_HEADER include/spdk/stdinc.h 00:02:26.917 TEST_HEADER include/spdk/string.h 00:02:26.917 TEST_HEADER include/spdk/thread.h 00:02:26.917 TEST_HEADER include/spdk/trace.h 00:02:26.917 TEST_HEADER include/spdk/trace_parser.h 00:02:26.917 TEST_HEADER include/spdk/ublk.h 00:02:26.917 TEST_HEADER include/spdk/tree.h 00:02:27.181 TEST_HEADER include/spdk/util.h 00:02:27.181 TEST_HEADER include/spdk/uuid.h 00:02:27.181 TEST_HEADER include/spdk/version.h 00:02:27.181 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:27.181 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:27.181 TEST_HEADER include/spdk/vmd.h 00:02:27.181 TEST_HEADER include/spdk/vhost.h 00:02:27.181 TEST_HEADER include/spdk/xor.h 00:02:27.181 CXX test/cpp_headers/accel.o 00:02:27.181 TEST_HEADER include/spdk/zipf.h 00:02:27.181 CXX test/cpp_headers/accel_module.o 00:02:27.181 CXX test/cpp_headers/assert.o 00:02:27.181 CXX test/cpp_headers/barrier.o 00:02:27.181 CXX test/cpp_headers/base64.o 00:02:27.181 CXX test/cpp_headers/bdev.o 00:02:27.181 CXX test/cpp_headers/bdev_module.o 00:02:27.181 CXX test/cpp_headers/bdev_zone.o 00:02:27.181 CXX test/cpp_headers/bit_array.o 00:02:27.181 CXX test/cpp_headers/bit_pool.o 00:02:27.181 CXX test/cpp_headers/blob_bdev.o 00:02:27.181 CXX test/cpp_headers/blob.o 00:02:27.181 CXX test/cpp_headers/blobfs_bdev.o 00:02:27.181 CXX test/cpp_headers/conf.o 00:02:27.181 CXX test/cpp_headers/blobfs.o 00:02:27.181 CXX test/cpp_headers/cpuset.o 00:02:27.181 CXX test/cpp_headers/crc16.o 00:02:27.181 CXX test/cpp_headers/crc32.o 00:02:27.181 CXX test/cpp_headers/crc64.o 00:02:27.181 CXX test/cpp_headers/config.o 00:02:27.181 CXX test/cpp_headers/dif.o 00:02:27.181 CXX test/cpp_headers/dma.o 00:02:27.181 CXX test/cpp_headers/endian.o 00:02:27.181 CXX test/cpp_headers/env_dpdk.o 00:02:27.181 CXX test/cpp_headers/env.o 00:02:27.181 CXX test/cpp_headers/event.o 00:02:27.181 CXX test/cpp_headers/fd_group.o 00:02:27.181 CXX test/cpp_headers/file.o 00:02:27.181 CXX test/cpp_headers/ftl.o 00:02:27.181 CXX test/cpp_headers/fd.o 00:02:27.181 CXX test/cpp_headers/gpt_spec.o 00:02:27.181 CXX test/cpp_headers/histogram_data.o 00:02:27.181 CXX test/cpp_headers/hexlify.o 00:02:27.181 CXX test/cpp_headers/idxd.o 00:02:27.181 CXX test/cpp_headers/idxd_spec.o 
00:02:27.181 CXX test/cpp_headers/ioat_spec.o 00:02:27.181 CXX test/cpp_headers/init.o 00:02:27.181 CXX test/cpp_headers/iscsi_spec.o 00:02:27.181 CXX test/cpp_headers/json.o 00:02:27.181 CXX test/cpp_headers/ioat.o 00:02:27.181 CXX test/cpp_headers/jsonrpc.o 00:02:27.181 CXX test/cpp_headers/keyring.o 00:02:27.181 CXX test/cpp_headers/likely.o 00:02:27.181 CXX test/cpp_headers/lvol.o 00:02:27.181 CXX test/cpp_headers/log.o 00:02:27.181 CXX test/cpp_headers/memory.o 00:02:27.181 CXX test/cpp_headers/notify.o 00:02:27.181 CXX test/cpp_headers/mmio.o 00:02:27.181 CXX test/cpp_headers/keyring_module.o 00:02:27.181 CXX test/cpp_headers/nvme.o 00:02:27.181 CXX test/cpp_headers/nvme_ocssd.o 00:02:27.181 CC app/fio/nvme/fio_plugin.o 00:02:27.181 CXX test/cpp_headers/nvme_spec.o 00:02:27.181 CXX test/cpp_headers/nbd.o 00:02:27.181 CXX test/cpp_headers/nvme_intel.o 00:02:27.181 CXX test/cpp_headers/nvme_zns.o 00:02:27.181 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:27.181 CXX test/cpp_headers/nvmf_transport.o 00:02:27.181 CXX test/cpp_headers/nvmf_cmd.o 00:02:27.181 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:27.181 CXX test/cpp_headers/nvmf.o 00:02:27.181 CXX test/cpp_headers/nvmf_spec.o 00:02:27.181 CXX test/cpp_headers/opal.o 00:02:27.181 CXX test/cpp_headers/opal_spec.o 00:02:27.181 CXX test/cpp_headers/pci_ids.o 00:02:27.181 CXX test/cpp_headers/pipe.o 00:02:27.181 CXX test/cpp_headers/queue.o 00:02:27.181 CC test/app/histogram_perf/histogram_perf.o 00:02:27.181 CXX test/cpp_headers/reduce.o 00:02:27.181 CXX test/cpp_headers/rpc.o 00:02:27.181 CXX test/cpp_headers/stdinc.o 00:02:27.181 CXX test/cpp_headers/scheduler.o 00:02:27.181 CXX test/cpp_headers/scsi.o 00:02:27.181 CXX test/cpp_headers/scsi_spec.o 00:02:27.181 CXX test/cpp_headers/string.o 00:02:27.181 CXX test/cpp_headers/sock.o 00:02:27.181 CXX test/cpp_headers/thread.o 00:02:27.181 CC test/thread/poller_perf/poller_perf.o 00:02:27.181 CXX test/cpp_headers/trace.o 00:02:27.181 CXX test/cpp_headers/trace_parser.o 00:02:27.181 CXX test/cpp_headers/ublk.o 00:02:27.181 CXX test/cpp_headers/tree.o 00:02:27.181 CXX test/cpp_headers/util.o 00:02:27.181 CXX test/cpp_headers/vfio_user_pci.o 00:02:27.181 CXX test/cpp_headers/version.o 00:02:27.181 CXX test/cpp_headers/uuid.o 00:02:27.181 CXX test/cpp_headers/vmd.o 00:02:27.181 CXX test/cpp_headers/vfio_user_spec.o 00:02:27.181 CC examples/util/zipf/zipf.o 00:02:27.181 CXX test/cpp_headers/vhost.o 00:02:27.181 CXX test/cpp_headers/xor.o 00:02:27.181 CXX test/cpp_headers/zipf.o 00:02:27.181 CC test/app/jsoncat/jsoncat.o 00:02:27.181 CC test/env/vtophys/vtophys.o 00:02:27.181 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:27.181 CC test/app/stub/stub.o 00:02:27.181 LINK rpc_client_test 00:02:27.181 CC test/env/pci/pci_ut.o 00:02:27.181 CC test/env/memory/memory_ut.o 00:02:27.181 CC examples/ioat/perf/perf.o 00:02:27.181 LINK spdk_lspci 00:02:27.181 LINK spdk_nvme_discover 00:02:27.181 CC test/app/bdev_svc/bdev_svc.o 00:02:27.181 CC app/fio/bdev/fio_plugin.o 00:02:27.181 CC examples/ioat/verify/verify.o 00:02:27.443 CC test/dma/test_dma/test_dma.o 00:02:27.443 LINK nvmf_tgt 00:02:27.443 LINK spdk_trace_record 00:02:27.702 LINK interrupt_tgt 00:02:27.702 LINK iscsi_tgt 00:02:27.702 CC test/env/mem_callbacks/mem_callbacks.o 00:02:27.702 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:27.702 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:27.702 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:27.702 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:27.702 LINK spdk_tgt 00:02:27.702 LINK stub 
00:02:27.702 LINK histogram_perf 00:02:27.702 LINK env_dpdk_post_init 00:02:27.702 LINK spdk_dd 00:02:27.961 LINK vtophys 00:02:27.961 LINK zipf 00:02:27.961 LINK poller_perf 00:02:27.961 LINK jsoncat 00:02:27.961 LINK bdev_svc 00:02:27.961 LINK spdk_trace 00:02:27.961 LINK ioat_perf 00:02:27.961 LINK verify 00:02:28.220 LINK pci_ut 00:02:28.220 LINK test_dma 00:02:28.220 LINK spdk_nvme 00:02:28.220 LINK vhost_fuzz 00:02:28.220 LINK spdk_top 00:02:28.220 LINK spdk_bdev 00:02:28.220 LINK spdk_nvme_perf 00:02:28.220 LINK nvme_fuzz 00:02:28.480 LINK spdk_nvme_identify 00:02:28.480 CC app/vhost/vhost.o 00:02:28.480 CC examples/idxd/perf/perf.o 00:02:28.480 CC test/event/reactor/reactor.o 00:02:28.480 CC test/event/reactor_perf/reactor_perf.o 00:02:28.480 CC examples/sock/hello_world/hello_sock.o 00:02:28.480 CC test/event/event_perf/event_perf.o 00:02:28.480 CC examples/vmd/lsvmd/lsvmd.o 00:02:28.480 LINK mem_callbacks 00:02:28.480 CC examples/vmd/led/led.o 00:02:28.480 CC test/event/app_repeat/app_repeat.o 00:02:28.480 CC examples/thread/thread/thread_ex.o 00:02:28.480 CC test/event/scheduler/scheduler.o 00:02:28.480 LINK reactor 00:02:28.481 LINK app_repeat 00:02:28.481 LINK lsvmd 00:02:28.481 LINK reactor_perf 00:02:28.481 LINK event_perf 00:02:28.481 LINK led 00:02:28.481 LINK vhost 00:02:28.779 LINK hello_sock 00:02:28.779 CC test/nvme/err_injection/err_injection.o 00:02:28.779 LINK idxd_perf 00:02:28.779 CC test/nvme/overhead/overhead.o 00:02:28.779 CC test/nvme/e2edp/nvme_dp.o 00:02:28.779 CC test/nvme/fdp/fdp.o 00:02:28.779 CC test/nvme/reset/reset.o 00:02:28.779 CC test/nvme/aer/aer.o 00:02:28.779 LINK scheduler 00:02:28.779 CC test/nvme/fused_ordering/fused_ordering.o 00:02:28.779 CC test/nvme/connect_stress/connect_stress.o 00:02:28.779 CC test/nvme/sgl/sgl.o 00:02:28.779 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:28.779 CC test/nvme/startup/startup.o 00:02:28.779 CC test/nvme/reserve/reserve.o 00:02:28.779 CC test/nvme/compliance/nvme_compliance.o 00:02:28.779 CC test/nvme/simple_copy/simple_copy.o 00:02:28.779 CC test/nvme/boot_partition/boot_partition.o 00:02:28.779 CC test/nvme/cuse/cuse.o 00:02:28.779 LINK thread 00:02:28.779 CC test/blobfs/mkfs/mkfs.o 00:02:28.779 CC test/accel/dif/dif.o 00:02:28.779 LINK memory_ut 00:02:28.779 CC test/lvol/esnap/esnap.o 00:02:28.779 LINK err_injection 00:02:29.040 LINK doorbell_aers 00:02:29.040 LINK boot_partition 00:02:29.040 LINK connect_stress 00:02:29.040 LINK startup 00:02:29.040 LINK reserve 00:02:29.040 LINK fused_ordering 00:02:29.040 LINK nvme_dp 00:02:29.040 LINK simple_copy 00:02:29.040 LINK sgl 00:02:29.040 LINK aer 00:02:29.040 LINK reset 00:02:29.040 LINK overhead 00:02:29.040 LINK mkfs 00:02:29.040 LINK nvme_compliance 00:02:29.040 LINK fdp 00:02:29.040 CC examples/nvme/reconnect/reconnect.o 00:02:29.040 CC examples/nvme/hello_world/hello_world.o 00:02:29.040 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:29.040 CC examples/nvme/arbitration/arbitration.o 00:02:29.040 CC examples/nvme/abort/abort.o 00:02:29.040 CC examples/nvme/hotplug/hotplug.o 00:02:29.040 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:29.040 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:29.040 LINK dif 00:02:29.302 LINK iscsi_fuzz 00:02:29.302 CC examples/accel/perf/accel_perf.o 00:02:29.302 LINK pmr_persistence 00:02:29.302 LINK cmb_copy 00:02:29.302 CC examples/blob/cli/blobcli.o 00:02:29.302 CC examples/blob/hello_world/hello_blob.o 00:02:29.302 LINK hello_world 00:02:29.302 LINK hotplug 00:02:29.302 LINK reconnect 00:02:29.302 LINK 
arbitration 00:02:29.302 LINK abort 00:02:29.564 LINK nvme_manage 00:02:29.564 LINK hello_blob 00:02:29.564 LINK accel_perf 00:02:29.825 CC test/bdev/bdevio/bdevio.o 00:02:29.825 LINK blobcli 00:02:29.825 LINK cuse 00:02:30.086 LINK bdevio 00:02:30.348 CC examples/bdev/hello_world/hello_bdev.o 00:02:30.348 CC examples/bdev/bdevperf/bdevperf.o 00:02:30.609 LINK hello_bdev 00:02:30.870 LINK bdevperf 00:02:31.444 CC examples/nvmf/nvmf/nvmf.o 00:02:32.015 LINK nvmf 00:02:32.960 LINK esnap 00:02:33.221 00:02:33.221 real 0m50.711s 00:02:33.221 user 6m33.893s 00:02:33.221 sys 4m37.549s 00:02:33.221 15:54:09 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:33.221 15:54:09 make -- common/autotest_common.sh@10 -- $ set +x 00:02:33.221 ************************************ 00:02:33.221 END TEST make 00:02:33.221 ************************************ 00:02:33.483 15:54:09 -- common/autotest_common.sh@1142 -- $ return 0 00:02:33.483 15:54:09 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:33.483 15:54:09 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:33.483 15:54:09 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:33.483 15:54:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.483 15:54:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:33.483 15:54:09 -- pm/common@44 -- $ pid=1946019 00:02:33.483 15:54:09 -- pm/common@50 -- $ kill -TERM 1946019 00:02:33.483 15:54:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.483 15:54:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:33.483 15:54:09 -- pm/common@44 -- $ pid=1946020 00:02:33.483 15:54:09 -- pm/common@50 -- $ kill -TERM 1946020 00:02:33.483 15:54:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.483 15:54:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:33.483 15:54:09 -- pm/common@44 -- $ pid=1946022 00:02:33.483 15:54:09 -- pm/common@50 -- $ kill -TERM 1946022 00:02:33.483 15:54:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.483 15:54:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:33.483 15:54:09 -- pm/common@44 -- $ pid=1946045 00:02:33.484 15:54:09 -- pm/common@50 -- $ sudo -E kill -TERM 1946045 00:02:33.484 15:54:09 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:33.484 15:54:09 -- nvmf/common.sh@7 -- # uname -s 00:02:33.484 15:54:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:33.484 15:54:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:33.484 15:54:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:33.484 15:54:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:33.484 15:54:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:33.484 15:54:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:33.484 15:54:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:33.484 15:54:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:33.484 15:54:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:33.484 15:54:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:33.484 15:54:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
00:02:33.484 15:54:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:33.484 15:54:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:33.484 15:54:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:33.484 15:54:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:33.484 15:54:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:33.484 15:54:09 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:33.484 15:54:09 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:33.484 15:54:09 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:33.484 15:54:09 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:33.484 15:54:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:33.484 15:54:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:33.484 15:54:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:33.484 15:54:09 -- paths/export.sh@5 -- # export PATH 00:02:33.484 15:54:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:33.484 15:54:09 -- nvmf/common.sh@47 -- # : 0 00:02:33.484 15:54:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:33.484 15:54:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:33.484 15:54:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:33.484 15:54:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:33.484 15:54:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:33.484 15:54:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:33.484 15:54:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:33.484 15:54:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:33.484 15:54:09 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:33.484 15:54:09 -- spdk/autotest.sh@32 -- # uname -s 00:02:33.484 15:54:09 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:33.484 15:54:09 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:33.484 15:54:09 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:33.484 15:54:09 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:33.484 15:54:09 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:33.484 15:54:09 -- spdk/autotest.sh@44 -- # modprobe nbd 
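(Editorial note, hedged: the nvmf/common.sh trace above exports the NVMe-oF defaults used later in this job — NVMF_PORT, NVMF_IP_PREFIX, NVMF_IP_LEAST_ADDR, NVME_CONNECT, NVME_HOST, NVME_SUBNQN. The sketch below shows how such variables are typically combined into an nvme-cli connect call; it is illustrative only, not a command taken from this log, and it assumes the usual SPDK test environment so that common.sh sources cleanly from the repository root:)

    # Sketch: paths are relative to the spdk checkout traced above.
    source test/nvmf/common.sh
    # NVME_CONNECT expands to "nvme connect", so it is left unquoted on purpose;
    # NVME_HOST carries the --hostnqn/--hostid pair exported by common.sh.
    $NVME_CONNECT "${NVME_HOST[@]}" -t tcp \
        -a "${NVMF_IP_PREFIX}.${NVMF_IP_LEAST_ADDR}" \
        -s "$NVMF_PORT" -n "$NVME_SUBNQN"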
00:02:33.484 15:54:09 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:33.484 15:54:09 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:33.484 15:54:09 -- spdk/autotest.sh@48 -- # udevadm_pid=2009243 00:02:33.484 15:54:09 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:33.484 15:54:09 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:33.484 15:54:09 -- pm/common@17 -- # local monitor 00:02:33.484 15:54:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.484 15:54:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.484 15:54:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.484 15:54:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.484 15:54:09 -- pm/common@21 -- # date +%s 00:02:33.484 15:54:09 -- pm/common@21 -- # date +%s 00:02:33.484 15:54:09 -- pm/common@25 -- # sleep 1 00:02:33.484 15:54:09 -- pm/common@21 -- # date +%s 00:02:33.484 15:54:09 -- pm/common@21 -- # date +%s 00:02:33.484 15:54:09 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721051649 00:02:33.484 15:54:09 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721051649 00:02:33.484 15:54:09 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721051649 00:02:33.484 15:54:09 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721051649 00:02:33.484 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721051649_collect-vmstat.pm.log 00:02:33.484 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721051649_collect-cpu-load.pm.log 00:02:33.746 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721051649_collect-cpu-temp.pm.log 00:02:33.746 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721051649_collect-bmc-pm.bmc.pm.log 00:02:34.686 15:54:10 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:34.686 15:54:10 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:34.686 15:54:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:34.686 15:54:10 -- common/autotest_common.sh@10 -- # set +x 00:02:34.686 15:54:10 -- spdk/autotest.sh@59 -- # create_test_list 00:02:34.686 15:54:10 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:34.686 15:54:10 -- common/autotest_common.sh@10 -- # set +x 00:02:34.686 15:54:10 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:34.686 15:54:10 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:34.686 15:54:10 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:34.686 15:54:10 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:34.686 15:54:10 
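The four resource monitors launched here (collect-cpu-load, collect-vmstat, collect-cpu-temp and, via sudo, collect-bmc-pm) all follow the same start/stop pattern that the kill -TERM sequence earlier in the log above relies on. Below is a minimal sketch of that pattern for one collector, assuming each collector is backgrounded and tracked through a <name>.pid file under the power output directory; the flags are copied from the command lines above, while the backgrounding and pid-file handling are inferred from the collect-*.pid checks in the earlier cleanup trace, not quoted from pm/common.

  # Minimal sketch of the monitor start/stop pattern (assumed, not the actual pm/common code).
  power_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
  tag=monitor.autotest.sh.$(date +%s)   # e.g. monitor.autotest.sh.1721051649 in this run

  # start: the collector samples into the power dir; the per-run log it announces
  # ("Redirecting to ..._collect-cpu-load.pm.log" above) is named after the same tag
  ./scripts/perf/pm/collect-cpu-load -d "$power_dir" -l -p "$tag" &
  echo $! > "$power_dir/collect-cpu-load.pid"

  # stop: what signal_monitor_resources TERM does at teardown
  if [[ -e "$power_dir/collect-cpu-load.pid" ]]; then
      kill -TERM "$(cat "$power_dir/collect-cpu-load.pid")"
  fi

The same pairing explains why the cleanup earlier in the log probes each collect-*.pid file before sending TERM, and why the bmc-pm monitor, having been started with sudo -E, is also terminated with sudo -E kill while the others are not.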
-- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:34.686 15:54:10 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:34.686 15:54:10 -- common/autotest_common.sh@1455 -- # uname 00:02:34.686 15:54:10 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:34.686 15:54:10 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:34.686 15:54:10 -- common/autotest_common.sh@1475 -- # uname 00:02:34.686 15:54:10 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:34.686 15:54:10 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:34.686 15:54:10 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:34.686 15:54:10 -- spdk/autotest.sh@72 -- # hash lcov 00:02:34.686 15:54:10 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:34.686 15:54:10 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:34.686 --rc lcov_branch_coverage=1 00:02:34.686 --rc lcov_function_coverage=1 00:02:34.686 --rc genhtml_branch_coverage=1 00:02:34.686 --rc genhtml_function_coverage=1 00:02:34.686 --rc genhtml_legend=1 00:02:34.686 --rc geninfo_all_blocks=1 00:02:34.686 ' 00:02:34.686 15:54:10 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:34.686 --rc lcov_branch_coverage=1 00:02:34.686 --rc lcov_function_coverage=1 00:02:34.686 --rc genhtml_branch_coverage=1 00:02:34.686 --rc genhtml_function_coverage=1 00:02:34.686 --rc genhtml_legend=1 00:02:34.686 --rc geninfo_all_blocks=1 00:02:34.686 ' 00:02:34.686 15:54:10 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:34.686 --rc lcov_branch_coverage=1 00:02:34.686 --rc lcov_function_coverage=1 00:02:34.686 --rc genhtml_branch_coverage=1 00:02:34.686 --rc genhtml_function_coverage=1 00:02:34.686 --rc genhtml_legend=1 00:02:34.686 --rc geninfo_all_blocks=1 00:02:34.686 --no-external' 00:02:34.686 15:54:10 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:34.686 --rc lcov_branch_coverage=1 00:02:34.686 --rc lcov_function_coverage=1 00:02:34.686 --rc genhtml_branch_coverage=1 00:02:34.686 --rc genhtml_function_coverage=1 00:02:34.686 --rc genhtml_legend=1 00:02:34.686 --rc geninfo_all_blocks=1 00:02:34.686 --no-external' 00:02:34.686 15:54:10 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:34.686 lcov: LCOV version 1.14 00:02:34.686 15:54:10 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:36.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:36.068 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:36.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:36.068 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:36.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:36.068 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:36.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:36.068 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:36.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:36.068 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:36.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:36.068 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:36.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:36.068 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:36.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:36.068 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:36.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:36.329 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:36.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:36.329 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:36.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:36.329 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:36.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:36.329 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:36.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:36.329 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:36.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:36.329 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:36.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:36.329 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:36.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:36.329 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:36.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:36.329 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:36.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:36.329 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:36.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:36.329 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:36.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:36.329 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:36.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:36.329 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:36.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:36.329 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:36.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:36.329 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:36.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:36.329 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:36.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:36.329 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:36.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:36.329 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:36.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:36.329 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:36.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:36.329 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:36.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:36.329 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:36.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:36.329 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:36.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:36.329 geninfo: WARNING: GCOV 
did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:36.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:36.329 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:36.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:36.329 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:36.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:36.329 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:36.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:36.329 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:36.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:36.329 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:36.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:36.329 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:36.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:36.329 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:36.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:36.591 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:36.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:36.591 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:36.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:36.591 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:36.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:36.591 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:36.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:36.591 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:36.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:36.591 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:36.591 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:36.591 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:36.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:36.591 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:36.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:36.591 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:36.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:36.591 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:36.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:36.591 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:36.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:36.591 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:36.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:36.591 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:36.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:36.591 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:36.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:36.591 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:36.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:36.591 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:36.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:36.591 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:36.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:36.591 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:36.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:36.591 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:36.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:36.592 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:36.592 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:36.592 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:36.592 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:36.592 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:36.592 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:36.592 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:36.592 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:36.592 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:36.592 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:36.592 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:36.592 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:36.592 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:36.592 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:36.592 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:36.592 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:36.592 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:36.592 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:36.592 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:36.592 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:36.592 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:36.592 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:36.592 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:36.592 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:36.592 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:36.852 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:36.852 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:36.852 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:36.852 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:36.852 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:36.852 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:36.852 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:36.852 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:36.852 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:36.852 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:36.852 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:36.852 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:36.852 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:36.852 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:36.852 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:36.852 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:36.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:36.853 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:36.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:36.853 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:36.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:36.853 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:36.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:36.853 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:36.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:36.853 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:36.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:36.853 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:36.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:36.853 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:36.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:36.853 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:36.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:36.853 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:36.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:36.853 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:51.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:51.768 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:04.059 15:54:39 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:04.059 15:54:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:04.059 15:54:39 -- common/autotest_common.sh@10 -- # set +x 00:03:04.059 15:54:39 -- spdk/autotest.sh@91 -- # rm -f 00:03:04.059 15:54:39 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:07.383 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:07.383 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:07.383 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:07.383 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:07.383 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:07.383 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:07.383 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:07.383 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:07.383 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:07.383 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:07.383 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:07.383 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:07.383 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:07.383 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:07.383 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:07.383 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:07.383 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:07.643 15:54:43 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:07.644 15:54:43 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:07.644 15:54:43 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:07.644 15:54:43 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:07.644 15:54:43 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:07.644 15:54:43 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:07.644 15:54:43 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:07.644 15:54:43 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:07.644 15:54:43 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:07.644 15:54:43 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:07.644 15:54:43 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:07.644 15:54:43 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:07.644 15:54:43 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:07.644 15:54:43 -- 
scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:07.644 15:54:43 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:07.644 No valid GPT data, bailing 00:03:07.644 15:54:43 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:07.644 15:54:43 -- scripts/common.sh@391 -- # pt= 00:03:07.644 15:54:43 -- scripts/common.sh@392 -- # return 1 00:03:07.644 15:54:43 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:07.644 1+0 records in 00:03:07.644 1+0 records out 00:03:07.644 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00168064 s, 624 MB/s 00:03:07.644 15:54:43 -- spdk/autotest.sh@118 -- # sync 00:03:07.644 15:54:43 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:07.644 15:54:43 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:07.644 15:54:43 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:15.791 15:54:51 -- spdk/autotest.sh@124 -- # uname -s 00:03:15.791 15:54:51 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:15.791 15:54:51 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:15.791 15:54:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:15.791 15:54:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:15.791 15:54:51 -- common/autotest_common.sh@10 -- # set +x 00:03:15.791 ************************************ 00:03:15.791 START TEST setup.sh 00:03:15.791 ************************************ 00:03:15.791 15:54:51 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:15.791 * Looking for test storage... 00:03:15.791 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:15.791 15:54:51 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:15.791 15:54:51 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:15.791 15:54:51 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:15.791 15:54:51 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:15.791 15:54:51 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:15.791 15:54:51 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:15.791 ************************************ 00:03:15.791 START TEST acl 00:03:15.791 ************************************ 00:03:15.791 15:54:51 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:16.051 * Looking for test storage... 
00:03:16.051 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:16.051 15:54:51 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:16.051 15:54:51 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:16.051 15:54:51 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:16.051 15:54:51 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:16.051 15:54:51 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:16.051 15:54:51 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:16.051 15:54:51 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:16.051 15:54:51 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:16.051 15:54:51 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:16.051 15:54:51 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:16.051 15:54:51 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:16.051 15:54:51 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:16.051 15:54:51 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:16.051 15:54:51 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:16.051 15:54:51 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:16.051 15:54:51 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:19.345 15:54:54 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:19.345 15:54:54 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:19.345 15:54:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:19.345 15:54:54 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:19.345 15:54:54 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:19.345 15:54:54 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:22.645 Hugepages 00:03:22.645 node hugesize free / total 00:03:22.645 15:54:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:22.645 15:54:58 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:22.645 15:54:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.646 00:03:22.646 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:01.1 == *:*:*.* ]] 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:22.646 15:54:58 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:22.646 15:54:58 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:22.646 15:54:58 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:22.646 15:54:58 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:22.907 ************************************ 00:03:22.907 START TEST denied 00:03:22.907 ************************************ 00:03:22.907 15:54:58 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:22.907 15:54:58 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:03:22.907 15:54:58 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:22.907 15:54:58 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:03:22.907 15:54:58 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:22.907 15:54:58 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:27.105 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:03:27.105 15:55:02 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:03:27.105 15:55:02 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:27.105 15:55:02 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:27.105 15:55:02 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:03:27.105 15:55:02 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:03:27.105 15:55:02 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:27.105 15:55:02 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:27.105 15:55:02 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:27.105 15:55:02 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:27.105 15:55:02 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:31.330 00:03:31.330 real 0m8.222s 00:03:31.330 user 0m2.601s 00:03:31.330 sys 0m4.831s 00:03:31.330 15:55:06 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:31.330 15:55:06 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:31.330 ************************************ 00:03:31.330 END TEST denied 00:03:31.330 ************************************ 00:03:31.330 15:55:06 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:31.330 15:55:06 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:31.330 15:55:06 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:31.330 15:55:06 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:31.330 15:55:06 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:31.330 ************************************ 00:03:31.330 START TEST allowed 00:03:31.330 ************************************ 00:03:31.330 15:55:06 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:31.330 15:55:06 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:03:31.330 15:55:06 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:31.330 15:55:06 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:03:31.330 15:55:06 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:31.330 15:55:06 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:36.650 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:36.650 15:55:12 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:36.650 15:55:12 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:36.650 15:55:12 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:36.650 15:55:12 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:36.650 15:55:12 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:40.854 00:03:40.854 real 0m9.403s 00:03:40.854 user 0m2.804s 00:03:40.854 sys 0m4.904s 00:03:40.854 15:55:16 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:40.854 15:55:16 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:40.854 ************************************ 00:03:40.854 END TEST allowed 00:03:40.854 ************************************ 00:03:40.854 15:55:16 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:40.854 00:03:40.854 real 0m24.680s 00:03:40.854 user 0m7.862s 00:03:40.854 sys 0m14.400s 00:03:40.854 15:55:16 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:40.854 15:55:16 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:40.854 ************************************ 00:03:40.854 END TEST acl 00:03:40.854 ************************************ 00:03:40.854 15:55:16 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:40.854 15:55:16 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:40.854 15:55:16 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:40.854 15:55:16 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.854 15:55:16 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:40.854 ************************************ 00:03:40.854 START TEST hugepages 00:03:40.854 ************************************ 00:03:40.854 15:55:16 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:40.854 * Looking for test storage... 00:03:40.854 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:40.854 15:55:16 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:40.854 15:55:16 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:40.854 15:55:16 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:40.854 15:55:16 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:40.854 15:55:16 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:40.854 15:55:16 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:40.854 15:55:16 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:40.854 15:55:16 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:40.854 15:55:16 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:40.854 15:55:16 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:40.854 15:55:16 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.855 15:55:16 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.855 15:55:16 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.855 15:55:16 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.855 15:55:16 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.855 15:55:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.855 15:55:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.855 15:55:16 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 102952940 kB' 'MemAvailable: 106438744 kB' 'Buffers: 2704 kB' 'Cached: 14446328 kB' 'SwapCached: 0 kB' 'Active: 11487380 kB' 'Inactive: 3523448 kB' 'Active(anon): 11013196 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565060 kB' 'Mapped: 172768 kB' 'Shmem: 10451400 kB' 'KReclaimable: 529116 kB' 'Slab: 1394232 kB' 'SReclaimable: 529116 kB' 'SUnreclaim: 865116 kB' 'KernelStack: 27280 kB' 'PageTables: 8784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460892 kB' 'Committed_AS: 12595740 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235348 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4384116 kB' 'DirectMap2M: 28850176 kB' 'DirectMap1G: 102760448 kB'
00:03:40.855 15:55:16 setup.sh.hugepages -- setup/common.sh@31-32 -- # [get_meminfo scan: every /proc/meminfo field from MemTotal through HugePages_Free is read with IFS=': ' and compared against Hugepagesize; non-matching fields are skipped with continue]
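What the condensed scan above amounts to: setup/common.sh walks /proc/meminfo one line at a time, splits each line on ':' and whitespace, and only stops once the requested key (here Hugepagesize) matches. A minimal standalone sketch of that pattern; the function name get_meminfo_field is illustrative and not the exact SPDK helper:

    #!/usr/bin/env bash
    # Look up a single /proc/meminfo field, mirroring the IFS=': ' / read /
    # continue loop visible in the trace above.
    get_meminfo_field() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every non-matching field
            echo "$val"                        # numeric column only, unit dropped
            return 0
        done < /proc/meminfo
        return 1                               # requested field not present
    }

    get_meminfo_field Hugepagesize   # prints 2048 on this test node

On this system the lookup yields 2048, which is what hugepages.sh records as default_hugepages a few entries further down.

00:03:40.855 15:55:16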
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.855 15:55:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.855 15:55:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.855 15:55:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.855 15:55:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.855 15:55:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.855 15:55:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.855 15:55:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.855 15:55:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.855 15:55:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.855 15:55:16 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:40.855 15:55:16 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:40.855 15:55:16 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:40.855 15:55:16 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:40.855 15:55:16 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:40.855 15:55:16 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:40.855 15:55:16 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:40.855 15:55:16 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:40.855 15:55:16 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:40.855 15:55:16 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:40.855 15:55:16 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:40.855 15:55:16 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:40.855 15:55:16 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:40.855 15:55:16 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:40.855 15:55:16 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:40.855 15:55:16 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:40.855 15:55:16 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:40.855 15:55:16 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:40.855 15:55:16 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:40.855 15:55:16 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:40.855 15:55:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:40.855 15:55:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:40.855 15:55:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:40.855 15:55:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:40.855 15:55:16 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:40.855 15:55:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:40.855 15:55:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # 
echo 0 00:03:40.856 15:55:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:40.856 15:55:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:40.856 15:55:16 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:40.856 15:55:16 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:40.856 15:55:16 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:40.856 15:55:16 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:40.856 15:55:16 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.856 15:55:16 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:40.856 ************************************ 00:03:40.856 START TEST default_setup 00:03:40.856 ************************************ 00:03:40.856 15:55:16 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:40.856 15:55:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:40.856 15:55:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:40.856 15:55:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:40.856 15:55:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:40.856 15:55:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:40.856 15:55:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:40.856 15:55:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:40.856 15:55:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:40.856 15:55:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:40.856 15:55:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:40.856 15:55:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:40.856 15:55:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:40.856 15:55:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:40.856 15:55:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:40.856 15:55:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:40.856 15:55:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:40.856 15:55:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:40.856 15:55:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:40.856 15:55:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:40.856 15:55:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:40.856 15:55:16 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:40.856 15:55:16 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:44.157 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:44.157 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:44.157 0000:80:01.4 (8086 0b00): ioatdma -> 
vfio-pci 00:03:44.157 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:44.157 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:44.157 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:44.157 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:44.157 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:44.157 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:44.157 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:44.157 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:44.157 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:44.157 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:44.157 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:44.157 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:44.157 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:44.157 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:44.425 15:55:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:44.425 15:55:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:44.425 15:55:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:44.425 15:55:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:44.425 15:55:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:44.425 15:55:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:44.425 15:55:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:44.425 15:55:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:44.425 15:55:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:44.425 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:44.425 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:44.425 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:44.425 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:44.425 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.425 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.425 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.425 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.425 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.425 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.425 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.425 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105050160 kB' 'MemAvailable: 108535964 kB' 'Buffers: 2704 kB' 'Cached: 14446452 kB' 'SwapCached: 0 kB' 'Active: 11503568 kB' 'Inactive: 3523448 kB' 'Active(anon): 11029384 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 581264 kB' 'Mapped: 172852 kB' 'Shmem: 10451524 kB' 'KReclaimable: 529116 kB' 'Slab: 1391916 kB' 'SReclaimable: 529116 
kB' 'SUnreclaim: 862800 kB' 'KernelStack: 27376 kB' 'PageTables: 8804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12609324 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235412 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4384116 kB' 'DirectMap2M: 28850176 kB' 'DirectMap1G: 102760448 kB'
00:03:44.425 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [get_meminfo scan: every /proc/meminfo field from MemTotal through HardwareCorrupted is read with IFS=': ' and compared against AnonHugePages; non-matching fields are skipped with continue]
00:03:44.427 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:44.427 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:44.427 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:44.427 15:55:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:03:44.427 15:55:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
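The value just read (anon=0, i.e. no transparent huge pages in use) and the surplus and reserved counters fetched next feed the verification step: default_setup asked get_test_nr_hugepages for 2097152 kB, which at the 2048 kB page size works out to the 1024 pages visible in the snapshot above (HugePages_Total: 1024, HugePages_Free: 1024). A hedged sketch of that arithmetic and of the kind of consistency check it enables; the helper name meminfo_field and the exact pass condition are illustrative rather than the literal verify_nr_hugepages logic:

    #!/usr/bin/env bash
    # Compare the hugepage pool reported by /proc/meminfo with what the test
    # requested (2097152 kB split into 2048 kB pages).
    meminfo_field() { awk -v key="$1" -F': +' '$1 == key { print $2 + 0 }' /proc/meminfo; }

    requested_kb=2097152
    page_kb=$(meminfo_field Hugepagesize)            # 2048 on this node
    expected_pages=$(( requested_kb / page_kb ))     # 2097152 / 2048 = 1024

    total=$(meminfo_field HugePages_Total)
    free=$(meminfo_field HugePages_Free)
    rsvd=$(meminfo_field HugePages_Rsvd)
    surp=$(meminfo_field HugePages_Surp)

    echo "expected=$expected_pages total=$total free=$free rsvd=$rsvd surp=$surp"
    # A healthy pool for this test: the full allocation exists, none of it is
    # held by outstanding reservations, and none of it is surplus over the
    # configured count.
    (( total == expected_pages && free == total && rsvd == 0 && surp == 0 ))

00:03:44.427 15:55:20 setup.sh.hugepages.default_setup --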
setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.427 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:44.427 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:44.427 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:44.427 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.427 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.427 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.427 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.427 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.427 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.427 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.427 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105055404 kB' 'MemAvailable: 108541208 kB' 'Buffers: 2704 kB' 'Cached: 14446456 kB' 'SwapCached: 0 kB' 'Active: 11504304 kB' 'Inactive: 3523448 kB' 'Active(anon): 11030120 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 582144 kB' 'Mapped: 172788 kB' 'Shmem: 10451528 kB' 'KReclaimable: 529116 kB' 'Slab: 1391888 kB' 'SReclaimable: 529116 kB' 'SUnreclaim: 862772 kB' 'KernelStack: 27392 kB' 'PageTables: 8824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12609344 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235396 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4384116 kB' 'DirectMap2M: 28850176 kB' 'DirectMap1G: 102760448 kB' 00:03:44.427 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.427 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.427 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.427 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.427 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.427 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.427 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.427 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.427 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.427 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.427 15:55:20 setup.sh.hugepages.default_setup -- 
setup/common.sh@31-32 -- # [get_meminfo scan: the remaining /proc/meminfo fields (Buffers through CmaTotal) are read with IFS=': ' and compared against HugePages_Surp; non-matching fields are skipped with continue]
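The same lookup is node-aware: when get_meminfo is handed a node number, common.sh points mem_f at /sys/devices/system/node/node<N>/meminfo (that is what the "[[ -e /sys/devices/system/node/node/meminfo ]]" probe with an empty node id is testing above) and strips the leading "Node <N>" prefix from every line so the per-node counters parse exactly like the global ones. A sketch of that variant under the same caveat, with an illustrative function name and extglob enabled for the prefix pattern:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the "Node +([0-9]) " prefix pattern below

    # Look up one meminfo field either system-wide or for a single NUMA node.
    # Per-node files prefix every line with "Node <id> ".
    node_meminfo_field() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            line=${line#Node +([0-9]) }              # drop the "Node N " prefix if present
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < "$mem_f"
        return 1
    }

    node_meminfo_field HugePages_Surp      # system-wide surplus 2048 kB pages
    node_meminfo_field HugePages_Free 0    # free 2048 kB pages on NUMA node 0

00:03:44.428 15:55:20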
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.428 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.428 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.428 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.428 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.428 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.428 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.428 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.428 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.428 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.428 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.428 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.428 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.428 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.428 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.428 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.428 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.428 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.428 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.428 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.428 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.428 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.428 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.428 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:44.429 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:44.429 15:55:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:44.429 15:55:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:44.429 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:44.429 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:44.429 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:44.429 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:44.429 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.429 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.429 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.429 15:55:20 setup.sh.hugepages.default_setup -- 
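What the trace above amounts to: get_meminfo in setup/common.sh walks the relevant meminfo file one "key: value" pair at a time and prints the value of the single requested key (here HugePages_Surp, which came back as 0). A minimal stand-alone sketch of that kind of lookup is shown below; the helper name meminfo_value and its argument handling are illustrative assumptions, not the SPDK function itself.

#!/usr/bin/env bash
# Illustrative stand-in for the traced lookup: print the numeric value of one
# meminfo key, either system-wide or for a single NUMA node.
meminfo_value() {
    local key=$1 node=$2
    local file=/proc/meminfo
    # Per-node queries read the node's own meminfo file, as the trace does for node0 further down.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        file=/sys/devices/system/node/node$node/meminfo
    fi
    # Scan the fields until "<key>:" is found and print the value that follows it.
    awk -v k="$key" 'BEGIN { k = k ":" }
        { for (i = 1; i <= NF; i++) if ($i == k) { print $(i + 1); exit } }' "$file"
}

# With the values visible in this log:
#   meminfo_value HugePages_Surp      -> 0
#   meminfo_value HugePages_Total 0   -> 1024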
00:03:44.429 15:55:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:44.429 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:44.429 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:44.429 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:44.429 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:44.429 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:44.429 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:44.429 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:44.429 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:44.429 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:44.429 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:44.429 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:44.429 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105051492 kB' 'MemAvailable: 108537296 kB' 'Buffers: 2704 kB' 'Cached: 14446472 kB' 'SwapCached: 0 kB' 'Active: 11506820 kB' 'Inactive: 3523448 kB' 'Active(anon): 11032636 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584780 kB' 'Mapped: 172788 kB' 'Shmem: 10451544 kB' 'KReclaimable: 529116 kB' 'Slab: 1391968 kB' 'SReclaimable: 529116 kB' 'SUnreclaim: 862852 kB' 'KernelStack: 27408 kB' 'PageTables: 8892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12609364 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235364 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4384116 kB' 'DirectMap2M: 28850176 kB' 'DirectMap1G: 102760448 kB'
[... /proc/meminfo keys MemTotal through HugePages_Free are read and skipped by the repeated IFS=': ' / read -r var val _ / continue steps while get_meminfo scans for HugePages_Rsvd ...]
00:03:44.430 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:44.430 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:44.430 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:44.430 15:55:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:03:44.430 15:55:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:44.430 nr_hugepages=1024
00:03:44.430 15:55:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:44.430 resv_hugepages=0
00:03:44.430 15:55:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:44.430 surplus_hugepages=0
00:03:44.430 15:55:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:44.431 anon_hugepages=0
00:03:44.431 15:55:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:44.431 15:55:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
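The setup/hugepages.sh lines above boil down to a consistency check: the 1024 default-size hugepages requested for the test must all be reported by the kernel, with no reserved or surplus pages unaccounted for (surp=0, resv=0, nr_hugepages=1024 in this run). A rough, self-contained sketch of that check follows; the variable names are ours, not the script's, and the comparison simply mirrors the (( ... == nr_hugepages + surp + resv )) test seen in the trace.

#!/usr/bin/env bash
# Rough sketch of the hugepage accounting check performed above (illustrative only).
nr_hugepages=1024                                                  # pages requested by the test setup
surp=$(awk '$1 == "HugePages_Surp:" { print $2 }' /proc/meminfo)   # 0 in this run
resv=$(awk '$1 == "HugePages_Rsvd:" { print $2 }' /proc/meminfo)   # 0 in this run
total=$(awk '$1 == "HugePages_Total:" { print $2 }' /proc/meminfo) # 1024 in this run
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting is consistent: total=$total surp=$surp resv=$resv"
else
    echo "unexpected hugepage accounting: total=$total surp=$surp resv=$resv" >&2
    exit 1
fi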
00:03:44.431 15:55:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:44.431 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:44.431 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:44.431 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:44.431 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:44.431 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:44.431 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:44.431 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:44.431 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:44.431 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:44.431 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:44.431 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:44.431 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105051492 kB' 'MemAvailable: 108537296 kB' 'Buffers: 2704 kB' 'Cached: 14446472 kB' 'SwapCached: 0 kB' 'Active: 11506712 kB' 'Inactive: 3523448 kB' 'Active(anon): 11032528 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584636 kB' 'Mapped: 172788 kB' 'Shmem: 10451544 kB' 'KReclaimable: 529116 kB' 'Slab: 1391968 kB' 'SReclaimable: 529116 kB' 'SUnreclaim: 862852 kB' 'KernelStack: 27408 kB' 'PageTables: 8892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12609388 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235364 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4384116 kB' 'DirectMap2M: 28850176 kB' 'DirectMap1G: 102760448 kB'
[... /proc/meminfo keys MemTotal through Unaccepted are read and skipped by the repeated IFS=': ' / read -r var val _ / continue steps while get_meminfo scans for HugePages_Total ...]
00:03:44.432 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:44.432 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:03:44.432 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:44.432 15:55:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:44.432 15:55:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:03:44.432 15:55:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:03:44.432 15:55:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:44.432 15:55:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:44.432 15:55:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:44.432 15:55:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:44.432 15:55:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:44.432 15:55:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:44.432 15:55:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:44.432 15:55:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:44.432 15:55:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:44.432 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:44.432 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:03:44.432 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:44.432 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:44.432 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:44.432 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:44.432 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:44.432 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:44.432 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:44.432 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:44.432 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:44.432 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 52533632 kB' 'MemUsed: 13125376 kB' 'SwapCached: 0 kB' 'Active: 4855936 kB' 'Inactive: 3298680 kB' 'Active(anon): 4703388 kB' 'Inactive(anon): 0 kB' 'Active(file): 152548 kB' 'Inactive(file): 3298680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7841132 kB' 'Mapped: 73380 kB' 'AnonPages: 316984 kB' 'Shmem: 4389904 kB' 'KernelStack: 16472 kB' 'PageTables: 5564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 396548 kB' 'Slab: 916444 kB' 'SReclaimable: 396548 kB' 'SUnreclaim: 519896 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
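Having verified the global counters, the trace above shows get_nodes enumerating /sys/devices/system/node/node0 and node1 (no_nodes=2) and then repeating the same meminfo lookup against node0's own meminfo file. The sketch below illustrates that kind of per-NUMA-node walk; the array and variable names are ours (the script itself uses nodes_sys/nodes_test), and reading the per-node count from the hugepages-2048kB sysfs counter is an assumption based on the 2048 kB hugepage size visible in this log, not necessarily what setup/hugepages.sh does.

#!/usr/bin/env bash
# Illustrative sketch of a per-NUMA-node hugepage inventory.
shopt -s extglob nullglob            # the trace itself globs with node+([0-9])

declare -A node_hugepages
for node_dir in /sys/devices/system/node/node+([0-9]); do
    node=${node_dir##*node}          # /sys/devices/system/node/node0 -> 0
    # Per-node counts can come from the node's meminfo ("Node 0 HugePages_Total: ...")
    # or, as in this sketch, from the per-node sysfs counter for the 2048 kB page size.
    node_hugepages[$node]=$(cat "$node_dir"/hugepages/hugepages-2048kB/nr_hugepages)
done

echo "nodes found: ${#node_hugepages[@]}"          # 2 on this machine (node0, node1)
for node in "${!node_hugepages[@]}"; do
    echo "node$node: ${node_hugepages[$node]} default-size hugepages"
done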
[... node0 meminfo keys MemTotal through KReclaimable are read and skipped by the repeated IFS=': ' / read -r var val _ / continue steps while get_meminfo scans for HugePages_Surp on node 0 ...]
00:03:44.433 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.433 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.433 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.433 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.433 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.433 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.433 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.433 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.433 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.433 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.433 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.433 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.433 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.433 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.433 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.433 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.433 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.433 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.433 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.433 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.433 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.433 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.433 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.433 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.433 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.433 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.433 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.433 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.433 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.433 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.433 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.433 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.433 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.433 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.433 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:03:44.433 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.433 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.434 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.434 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.434 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.434 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.434 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.434 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.434 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.434 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.434 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:44.434 15:55:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:44.434 15:55:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:44.434 15:55:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:44.434 15:55:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:44.434 15:55:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:44.434 15:55:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:44.434 node0=1024 expecting 1024 00:03:44.434 15:55:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:44.434 00:03:44.434 real 0m3.654s 00:03:44.434 user 0m1.306s 00:03:44.434 sys 0m2.268s 00:03:44.434 15:55:20 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:44.434 15:55:20 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:44.434 ************************************ 00:03:44.434 END TEST default_setup 00:03:44.434 ************************************ 00:03:44.434 15:55:20 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:44.434 15:55:20 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:44.434 15:55:20 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:44.434 15:55:20 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:44.434 15:55:20 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:44.434 ************************************ 00:03:44.434 START TEST per_node_1G_alloc 00:03:44.434 ************************************ 00:03:44.434 15:55:20 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:44.434 15:55:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:44.434 15:55:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:44.434 15:55:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:44.434 15:55:20 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:44.434 15:55:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:44.434 15:55:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:44.434 15:55:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:44.434 15:55:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:44.434 15:55:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:44.434 15:55:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:44.434 15:55:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:44.434 15:55:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:44.434 15:55:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:44.434 15:55:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:44.434 15:55:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:44.434 15:55:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:44.434 15:55:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:44.434 15:55:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:44.434 15:55:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:44.434 15:55:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:44.434 15:55:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:44.434 15:55:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:44.434 15:55:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:44.434 15:55:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:44.434 15:55:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:44.434 15:55:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:44.434 15:55:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:47.738 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:47.738 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:47.738 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:47.738 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:47.738 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:47.738 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:47.738 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:47.738 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:47.738 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:47.738 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:47.738 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:47.738 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:47.738 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:47.738 
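Editor's note: per_node_1G_alloc requests 1048576 kB (1 GiB) of hugepages on each of nodes 0 and 1. With the 2048 kB default hugepage size on this machine, that is the nr_hugepages=512 and nodes_test[_no_nodes]=512 assignments logged above, and the same request is handed to scripts/setup.sh as NRHUGE=512 HUGENODE=0,1 (the vfio-pci lines that follow just confirm the devices were already bound). A hedged sketch of the arithmetic, with hypothetical variable names:

    size_kb=1048576                                                        # 1 GiB requested per node
    default_hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo) # 2048 on this box
    nr_per_node=$(( size_kb / default_hugepage_kb ))                       # -> 512
    declare -A nodes_test=()
    for node in 0 1; do nodes_test[$node]=$nr_per_node; done
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"                   # node0=512 node1=512
    # the test then re-runs the allocator with the same request:
    #   NRHUGE=512 HUGENODE=0,1 .../spdk/scripts/setup.sh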
0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:47.738 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:47.738 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:47.738 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:47.999 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:47.999 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:47.999 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:47.999 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:47.999 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:47.999 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:47.999 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:47.999 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:47.999 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:47.999 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:47.999 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:47.999 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:47.999 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:47.999 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.999 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.999 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.999 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.999 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.999 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.999 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.999 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.999 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105128360 kB' 'MemAvailable: 108614164 kB' 'Buffers: 2704 kB' 'Cached: 14446612 kB' 'SwapCached: 0 kB' 'Active: 11505808 kB' 'Inactive: 3523448 kB' 'Active(anon): 11031624 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583252 kB' 'Mapped: 171960 kB' 'Shmem: 10451684 kB' 'KReclaimable: 529116 kB' 'Slab: 1391728 kB' 'SReclaimable: 529116 kB' 'SUnreclaim: 862612 kB' 'KernelStack: 27264 kB' 'PageTables: 8392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12605348 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235620 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4384116 kB' 'DirectMap2M: 28850176 kB' 'DirectMap1G: 102760448 kB' 00:03:47.999 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.999 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.999 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.999 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.999 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.999 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.999 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.999 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.999 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.999 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.999 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.999 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.999 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.000 15:55:23 
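Editor's note: the quoted 'MemTotal: ...' through 'DirectMap1G: ...' block a few lines up is the whole /proc/meminfo snapshot that verify_nr_hugepages captures with mapfile; the scan in progress here is its first pass, pulling AnonHugePages out of that snapshot after the THP policy check ([[ always [madvise] never != *\[\n\e\v\e\r\]* ]]) passed. A rough equivalent of that pass, with awk standing in for the snapshot scan (an assumption, not the script's own code):

    thp_policy=$(cat /sys/kernel/mm/transparent_hugepage/enabled)     # "always [madvise] never" in this run
    if [[ $thp_policy != *"[never]"* ]]; then
        anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    else
        anon_kb=0
    fi
    echo "AnonHugePages: ${anon_kb:-0} kB"                             # 0 kB here, hence anon=0 below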
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 
0 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105128744 kB' 'MemAvailable: 108614548 kB' 'Buffers: 2704 kB' 'Cached: 14446616 kB' 'SwapCached: 0 kB' 'Active: 11505864 kB' 'Inactive: 3523448 kB' 'Active(anon): 11031680 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583380 kB' 'Mapped: 171832 kB' 'Shmem: 10451688 kB' 'KReclaimable: 529116 kB' 'Slab: 1391728 kB' 'SReclaimable: 529116 kB' 'SUnreclaim: 862612 kB' 'KernelStack: 27280 kB' 'PageTables: 8452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12603380 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235556 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4384116 kB' 'DirectMap2M: 28850176 kB' 'DirectMap1G: 102760448 kB' 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.000 15:55:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.000 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.001 15:55:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.001 
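Editor's note: verify_nr_hugepages reads three counters from the same snapshot in sequence: AnonHugePages (anon=0 above), HugePages_Surp (this scan; surp=0 a little further down), and HugePages_Rsvd, before comparing the kernel's per-node pools against the 512-per-node layout the test asked for. The comparison itself is not shown in this excerpt; a hedged sketch of the shape such a check could take, reading the per-node sysfs counters directly:

    expected_per_node=512
    for node in 0 1; do
        got=$(awk '/HugePages_Total/ {print $4}' /sys/devices/system/node/node$node/meminfo)
        got=${got:-0}
        if (( got != expected_per_node )); then
            echo "node$node: $got hugepages, expected $expected_per_node" >&2
        fi
    done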
15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.001 15:55:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.001 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.265 15:55:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.265 15:55:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105130016 kB' 'MemAvailable: 108615820 kB' 'Buffers: 2704 kB' 'Cached: 14446616 kB' 'SwapCached: 0 kB' 'Active: 11505880 kB' 'Inactive: 3523448 kB' 'Active(anon): 11031696 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583336 kB' 'Mapped: 171912 kB' 'Shmem: 10451688 kB' 'KReclaimable: 529116 kB' 'Slab: 1391720 kB' 'SReclaimable: 529116 kB' 'SUnreclaim: 862604 kB' 'KernelStack: 27280 kB' 'PageTables: 8448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12605012 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235540 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4384116 kB' 'DirectMap2M: 28850176 kB' 'DirectMap1G: 102760448 kB' 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.265 
15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.265 15:55:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.265 15:55:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.265 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.266 15:55:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:48.266 15:55:23 
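(Editor's note: the trace above is bash xtrace output from setup/common.sh's get_meminfo helper scanning every key of /proc/meminfo against the requested field; the escaped form \H\u\g\e\P\a\g\e\s\_\R\s\v\d is simply how `set -x` prints the literal pattern on the right-hand side of [[ ... == ... ]]. Below is a condensed sketch of the equivalent logic — a simplification, not the verbatim setup/common.sh code — assuming bash with extglob available.)

    # Minimal sketch of the meminfo lookup the trace performs (simplified,
    # not the exact setup/common.sh implementation). Prints the value of one
    # field, preferring per-node statistics when a node index is supplied.
    get_meminfo() {
        local get=$1 node=${2:-} line var val _
        local mem_f=/proc/meminfo
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        shopt -s extglob
        local mem=()
        mapfile -t mem < "$mem_f"
        # Per-node meminfo prefixes every line with "Node <N> "; strip it so
        # the same key comparison works for both file layouts.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done
        echo 0
    }
    # In this run: get_meminfo HugePages_Rsvd -> 0, matching the "echo 0" above.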
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:48.266 nr_hugepages=1024 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:48.266 resv_hugepages=0 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:48.266 surplus_hugepages=0 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:48.266 anon_hugepages=0 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105131000 kB' 'MemAvailable: 108616804 kB' 'Buffers: 2704 kB' 'Cached: 14446616 kB' 'SwapCached: 0 kB' 'Active: 11506040 kB' 'Inactive: 3523448 kB' 'Active(anon): 11031856 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583496 kB' 'Mapped: 171912 kB' 'Shmem: 10451688 kB' 'KReclaimable: 529116 kB' 'Slab: 1391712 kB' 'SReclaimable: 529116 kB' 'SUnreclaim: 862596 kB' 'KernelStack: 27264 kB' 'PageTables: 8408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12605032 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235524 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4384116 kB' 'DirectMap2M: 28850176 kB' 'DirectMap1G: 
102760448 kB' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
[[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:48.267 15:55:23 
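(Editor's note: at this point the trace has collected surp=0 from HugePages_Surp, resv=0 from HugePages_Rsvd and 1024 from HugePages_Total, and hugepages.sh checks at @107/@110 that the kernel's total equals the requested page count plus surplus and reserved pages before calling get_nodes. A hedged sketch of that accounting check, reusing the get_meminfo sketch above; the literal values are the ones from this run.)

    # Sketch of the accounting check evaluated twice in the trace
    # (hugepages.sh@107 and @110).
    nr_hugepages=1024
    surp=$(get_meminfo HugePages_Surp)     # 0 here
    resv=$(get_meminfo HugePages_Rsvd)     # 0 here
    total=$(get_meminfo HugePages_Total)   # 1024 here
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent: $total pages"
    else
        echo "mismatch: total=$total expected=$((nr_hugepages + surp + resv))" >&2
    fi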
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53654824 kB' 'MemUsed: 12004184 kB' 'SwapCached: 0 kB' 'Active: 4856244 kB' 'Inactive: 3298680 kB' 'Active(anon): 4703696 kB' 'Inactive(anon): 0 kB' 'Active(file): 152548 kB' 'Inactive(file): 3298680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7841256 kB' 'Mapped: 72964 kB' 'AnonPages: 316852 kB' 'Shmem: 4390028 kB' 'KernelStack: 16360 kB' 'PageTables: 5020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 396548 kB' 'Slab: 916136 kB' 'SReclaimable: 396548 kB' 'SUnreclaim: 519588 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.267 15:55:23 
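(Editor's note: get_nodes, traced at hugepages.sh@27-@33, walks /sys/devices/system/node/node<N> and records an expected 512 pages per node — two nodes here, so the 1024 global pages split evenly — and the per-node loop then re-reads the counters from node0's meminfo, whose "Node 0 " line prefix get_meminfo strips. A rough sketch of that per-node expectation and readback; names mirror the traced script but are not copied from it.)

    # Rough sketch of the per-node split and readback seen in the trace
    # (values correspond to this two-node machine).
    nodes_sys=()
    for node in /sys/devices/system/node/node[0-9]*; do
        nodes_sys[${node##*node}]=512        # expected pages on each node
    done
    no_nodes=${#nodes_sys[@]}                # 2 here
    for n in "${!nodes_sys[@]}"; do
        total=$(get_meminfo HugePages_Total "$n")
        surp=$(get_meminfo HugePages_Surp "$n")
        echo "node$n: HugePages_Total=$total HugePages_Surp=$surp expected=${nodes_sys[$n]}"
    done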
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # (remaining node0 meminfo fields skipped; none match HugePages_Surp until the final field) 00:03:48.267 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.268 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:48.268 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:48.268
15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:48.268 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:48.268 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:48.268 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:48.268
15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679872 kB' 'MemFree: 51475000 kB' 'MemUsed: 9204872 kB' 'SwapCached: 0 kB' 'Active: 6649412 kB' 'Inactive: 224768 kB' 'Active(anon): 6327776 kB' 'Inactive(anon): 0 kB' 'Active(file): 321636 kB' 'Inactive(file): 224768 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6608128 kB' 'Mapped: 98948 kB' 'AnonPages: 266188 kB' 'Shmem: 6061724 kB' 'KernelStack: 10968 kB' 'PageTables: 3188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132568 kB' 'Slab: 475568 kB' 'SReclaimable: 132568 kB' 'SUnreclaim: 343000 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:48.268
15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # (node1 meminfo fields skipped until HugePages_Surp) 00:03:48.268 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.269 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:48.269 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:48.269
15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:48.269 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:48.269 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:48.269 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:48.269
node0=512 expecting 512 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:48.269 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:48.269 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:48.269 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:48.269
node1=512 expecting 512 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:48.269 00:03:48.269 real 0m3.731s 00:03:48.269 user 0m1.459s 00:03:48.269 sys 0m2.319s 00:03:48.269 15:55:23 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:48.269 15:55:23
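The xtrace above is setup/common.sh reading HugePages_Surp for each NUMA node from /sys/devices/system/node/nodeN/meminfo and falling back to /proc/meminfo when no node is given. A minimal stand-alone sketch of that lookup is below, assuming per-node meminfo lines carry a "Node N " prefix; get_node_meminfo is an illustrative name, not the script's actual helper.

#!/usr/bin/env bash
# Sketch: fetch one field (e.g. HugePages_Surp) from a node's meminfo.
get_node_meminfo() {                      # illustrative name, not setup/common.sh itself
    local get=$1 node=$2 line var val _
    local mem_f=/proc/meminfo
    # per-node meminfo lives under sysfs and prefixes every line with "Node N "
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    while read -r line; do
        line=${line#"Node $node "}              # drop the "Node N " prefix when present
        IFS=': ' read -r var val _ <<< "$line"  # e.g. var=HugePages_Surp val=0
        if [[ $var == "$get" ]]; then
            echo "$val"                         # numeric value; the kB unit falls into $_
            return 0
        fi
    done < "$mem_f"
    return 1
}

# example: surplus hugepages on NUMA node 1, as queried in the trace above
get_node_meminfo HugePages_Surp 1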
setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:48.269 ************************************ 00:03:48.269 END TEST per_node_1G_alloc 00:03:48.269 ************************************ 00:03:48.269 15:55:23 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:48.269 15:55:23 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:48.269 15:55:23 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:48.269 15:55:23 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:48.269 15:55:23 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:48.269 ************************************ 00:03:48.269 START TEST even_2G_alloc 00:03:48.269 ************************************ 00:03:48.269 15:55:24 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:48.269 15:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:48.269 15:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:48.269 15:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:48.269 15:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:48.269 15:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:48.269 15:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:48.269 15:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:48.269 15:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:48.269 15:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:48.269 15:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:48.269 15:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:48.269 15:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:48.269 15:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:48.269 15:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:48.269 15:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:48.269 15:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:48.269 15:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:48.269 15:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:48.269 15:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:48.269 15:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:48.269 15:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:48.269 15:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:48.269 15:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:48.269 15:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:48.269 15:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:48.269 15:55:24 
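The even_2G_alloc setup above passes 2097152 to get_test_nr_hugepages and ends up with nr_hugepages=1024 split as 512 pages per node. A small sketch of that arithmetic follows; treating the argument and the 2048 kB default hugepage size as the same unit is an assumption consistent with the values in this trace.

#!/usr/bin/env bash
# Sketch of the even-allocation math seen above (values taken from this log).
size_kb=2097152            # total requested for even_2G_alloc (2 GiB expressed in kB)
hugepage_kb=2048           # Hugepagesize reported further down in this log
no_nodes=2                 # NUMA nodes on this host

nr_hugepages=$(( size_kb / hugepage_kb ))   # 1024, matching nr_hugepages above
per_node=$(( nr_hugepages / no_nodes ))     # 512, matching nodes_test[0] and nodes_test[1]

echo "NRHUGE=$nr_hugepages node0=$per_node node1=$per_node"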
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:48.269 15:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.269 15:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:51.570 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:51.570 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:51.570 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:51.570 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:51.570 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:51.570 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:51.570 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:51.570 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:51.570 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:51.570 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:51.570 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:51.570 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:51.570 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:51.570 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:51.570 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:51.570 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:51.570 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:51.836 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:51.836 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:51.836 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:51.836 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:51.836 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:51.836 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:51.836 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:51.836 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:51.836 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:51.836 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:51.836 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:51.836 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:51.836 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:51.836 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.836 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.836 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:51.836 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.836 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.836 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.836 15:55:27 
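The "Already using the vfio-pci driver" lines above are printed as setup.sh walks PCI devices it would otherwise rebind. A minimal sketch of that per-device check via the sysfs driver symlink is below; current_driver and the reuse of 0000:65:00.0 from the listing are illustrative, not setup.sh's actual code.

#!/usr/bin/env bash
# Sketch: report which kernel driver a PCI function is currently bound to.
current_driver() {                           # illustrative helper, not setup.sh itself
    local bdf=$1 link=/sys/bus/pci/devices/$1/driver
    if [[ -e $link ]]; then
        basename "$(readlink -f "$link")"    # e.g. vfio-pci, nvme, ice
    else
        echo none                            # device not bound to any driver
    fi
}

bdf=0000:65:00.0                             # NVMe controller from the listing above
[[ $(current_driver "$bdf") == vfio-pci ]] &&
    echo "$bdf: Already using the vfio-pci driver"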
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.836 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105123528 kB' 'MemAvailable: 108609332 kB' 'Buffers: 2704 kB' 'Cached: 14446800 kB' 'SwapCached: 0 kB' 'Active: 11507920 kB' 'Inactive: 3523448 kB' 'Active(anon): 11033736 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585148 kB' 'Mapped: 172080 kB' 'Shmem: 10451872 kB' 'KReclaimable: 529116 kB' 'Slab: 1392208 kB' 'SReclaimable: 529116 kB' 'SUnreclaim: 863092 kB' 'KernelStack: 27456 kB' 'PageTables: 9112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12605676 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235812 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4384116 kB' 'DirectMap2M: 28850176 kB' 'DirectMap1G: 102760448 kB' 00:03:51.836 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.836 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.836 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.836 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.836 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.836 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.836 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.836 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.836 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.836 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.836 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.836 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.836 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.836 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.836 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.836 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.836 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.836 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.836 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.836 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.836 15:55:27 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # (fields SwapCached through HardwareCorrupted skipped while scanning /proc/meminfo for AnonHugePages) 00:03:51.837 15:55:27 setup.sh.hugepages.even_2G_alloc --
setup/common.sh@31 -- # IFS=': ' 00:03:51.837 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.837 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.837 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:51.837 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:51.837 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:51.837 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:51.837 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:51.837 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:51.837 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:51.837 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:51.837 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.837 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.837 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:51.837 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.837 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.837 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.837 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.837 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105120948 kB' 'MemAvailable: 108606752 kB' 'Buffers: 2704 kB' 'Cached: 14446804 kB' 'SwapCached: 0 kB' 'Active: 11510336 kB' 'Inactive: 3523448 kB' 'Active(anon): 11036152 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 587608 kB' 'Mapped: 172512 kB' 'Shmem: 10451876 kB' 'KReclaimable: 529116 kB' 'Slab: 1392192 kB' 'SReclaimable: 529116 kB' 'SUnreclaim: 863076 kB' 'KernelStack: 27536 kB' 'PageTables: 8940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12609024 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235828 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4384116 kB' 'DirectMap2M: 28850176 kB' 'DirectMap1G: 102760448 kB' 00:03:51.837 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.837 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.837 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.837 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:51.837 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # (fields MemFree through SUnreclaim skipped while scanning /proc/meminfo for HugePages_Surp) 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 15:55:27
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.838 15:55:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105118416 kB' 'MemAvailable: 108604220 kB' 'Buffers: 2704 kB' 'Cached: 14446820 kB' 'SwapCached: 0 kB' 'Active: 11512892 kB' 'Inactive: 3523448 kB' 'Active(anon): 11038708 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 590564 kB' 'Mapped: 172436 kB' 'Shmem: 10451892 kB' 'KReclaimable: 529116 kB' 'Slab: 1392160 kB' 'SReclaimable: 529116 kB' 'SUnreclaim: 863044 kB' 'KernelStack: 27472 kB' 'PageTables: 8968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12611960 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235816 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4384116 kB' 'DirectMap2M: 28850176 kB' 'DirectMap1G: 102760448 kB' 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
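The passes traced above and below all reuse the same lookup: point mem_f at /proc/meminfo (or at a node's meminfo under sysfs when a node index is given), split each line with IFS=': ', and return the value of the first key that matches the requested field. A minimal standalone sketch of that pattern, assuming bash on Linux; get_meminfo_field is an illustrative name, not the SPDK helper itself:

get_meminfo_field() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node counters live under sysfs; fall back to /proc/meminfo otherwise.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Per-node files prefix every line with "Node <n> "; strip that so the key
    # lands in $var exactly as it does for /proc/meminfo.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    echo 0
}

# e.g. get_meminfo_field HugePages_Surp      -> 0 on this box, per the snapshot above
#      get_meminfo_field HugePages_Total 0   -> node0's value from its sysfs meminfo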
00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.838 
15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
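A reading note on the trace format, since it dominates this section: lines such as [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] are bash xtrace output for a quoted literal comparison; when the right-hand side of == inside [[ ]] is quoted, xtrace prints it with every character escaped to show it is matched as a plain string rather than a glob. A tiny reproduction of that behaviour, assuming bash:

set -x
get=HugePages_Rsvd
var=MemTotal
[[ $var == "$get" ]] || echo "no match"   # traced as: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
set +x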
00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.843 15:55:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.843 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:51.844 nr_hugepages=1024 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:51.844 resv_hugepages=0 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:51.844 surplus_hugepages=0 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:51.844 anon_hugepages=0 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:51.844 15:55:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105117656 kB' 'MemAvailable: 108603460 kB' 'Buffers: 2704 kB' 'Cached: 14446844 kB' 'SwapCached: 0 kB' 'Active: 11507428 kB' 'Inactive: 3523448 kB' 'Active(anon): 11033244 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584524 kB' 'Mapped: 172436 kB' 'Shmem: 10451916 kB' 'KReclaimable: 529116 kB' 'Slab: 1392160 kB' 'SReclaimable: 529116 kB' 'SUnreclaim: 863044 kB' 'KernelStack: 27520 kB' 'PageTables: 8952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12605864 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235812 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4384116 kB' 'DirectMap2M: 28850176 kB' 'DirectMap1G: 102760448 kB' 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
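The values echoed just before this HugePages_Total pass (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) feed one identity: the kernel's HugePages_Total must equal the requested page count plus surplus and reserved pages before the per-node split is checked. A minimal sketch of that consistency check, assuming bash on Linux and an illustrative function name:

verify_hugepage_pool() {
    # Compare the kernel's view of the hugepage pool against the requested size.
    local requested=$1
    local total surp resv
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    if (( total == requested + surp + resv )); then
        echo "hugepage pool consistent: total=$total surp=$surp resv=$resv"
    else
        echo "hugepage pool mismatch: total=$total surp=$surp resv=$resv requested=$requested" >&2
        return 1
    fi
}

# e.g. verify_hugepage_pool 1024   # mirrors the (( 1024 == nr_hugepages + surp + resv )) check above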
00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.844 15:55:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.844 15:55:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:51.844 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:03:51.845 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:51.845 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:51.845 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:51.845 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:51.845 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:51.845 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:51.845 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:51.845 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:51.845 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:51.845 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:51.845 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:51.845 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:51.845 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:51.845 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:51.845 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:51.845 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:51.845 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:03:51.845 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:51.845 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:51.845 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:51.845 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:51.845 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:51.845 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:51.845 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:51.845 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
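The trace above is setup/common.sh's get_meminfo helper at work: it reads /proc/meminfo (or, when a node id is passed, that NUMA node's own meminfo file), strips the "Node N" prefix, then walks the "key: value" pairs until the requested key matches and echoes the value; here it just returned 1024 for HugePages_Total and is about to read node0's file for HugePages_Surp. A minimal standalone sketch of that pattern, reconstructed from the trace rather than copied from the SPDK source, looks like this:

shopt -s extglob

get_meminfo() {
	local get=$1 node=${2:-}
	local var val _ mem
	local mem_f=/proc/meminfo

	# Per-node queries read the node's own meminfo file when it exists.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem <"$mem_f"
	# Per-node files prefix every line with "Node <id> "; strip it so the
	# key comparison works for both the global and the per-node case.
	mem=("${mem[@]#Node +([0-9]) }")

	# Scan "key: value kB" lines until the requested key matches, mirroring
	# the [[ $var == $get ]] / continue loop visible in the trace.
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

get_meminfo HugePages_Total     # prints 1024 on this test node
get_meminfo HugePages_Surp 0    # surplus 2M pages on NUMA node 0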
00:03:51.845 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:51.845 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53658432 kB' 'MemUsed: 12000576 kB' 'SwapCached: 0 kB' 'Active: 4856812 kB' 'Inactive: 3298680 kB' 'Active(anon): 4704264 kB' 'Inactive(anon): 0 kB' 'Active(file): 152548 kB' 'Inactive(file): 3298680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7841424 kB' 'Mapped: 72964 kB' 'AnonPages: 317208 kB' 'Shmem: 4390196 kB' 'KernelStack: 16280 kB' 'PageTables: 4752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 396548 kB' 'Slab: 916260 kB' 'SReclaimable: 396548 kB' 'SUnreclaim: 519712 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:51.845 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:51.845 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:51.845 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:51.845 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:51.845 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:51.845 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:52.109 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:52.109 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:52.109 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:03:52.109 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:52.109 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:52.109 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:52.109 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:52.109 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:52.109 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:52.109 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:52.109 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
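The two get_meminfo calls above confirm that neither NUMA node reports surplus 2M pages, so the even_2G_alloc case expects the 1024 global hugepages to sit as 512 on node0 and 512 on node1 (the "node0=512 expecting 512" and "node1=512 expecting 512" lines that follow). A rough way to reproduce that per-node check outside the harness, with the sysfs paths below assumed rather than taken from hugepages.sh, would be:

shopt -s extglob

expected_per_node=512

for node in /sys/devices/system/node/node+([0-9]); do
	id=${node##*node}
	# Per-node 2 MiB hugepage count and surplus, read straight from sysfs;
	# the real test folds these into its nodes_test bookkeeping via get_meminfo.
	nr=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
	surp=$(awk '/HugePages_Surp/ {print $NF}' "$node/meminfo")
	echo "node$id=$((nr + surp)) expecting $expected_per_node"
	(( nr + surp == expected_per_node )) || exit 1
done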
00:03:52.109 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:52.109 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679872 kB' 'MemFree: 51466524 kB' 'MemUsed: 9213348 kB' 'SwapCached: 0 kB' 'Active: 6650044 kB' 'Inactive: 224768 kB' 'Active(anon): 6328408 kB' 'Inactive(anon): 0 kB' 'Active(file): 321636 kB' 'Inactive(file): 224768 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6608140 kB' 'Mapped: 98968 kB' 'AnonPages: 267224 kB' 'Shmem: 6061736 kB' 'KernelStack: 11112 kB' 'PageTables: 3740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132568 kB' 'Slab: 475864 kB' 'SReclaimable: 132568 kB' 'SUnreclaim: 343296 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:52.110 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:52.110 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:52.110 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:52.110 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:52.110 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:52.110 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:52.110 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:52.110 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:52.110 node0=512 expecting 512
00:03:52.110 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:52.110 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:52.110 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:52.110 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:52.110 node1=512 expecting 512
00:03:52.110 15:55:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:52.110 
00:03:52.110 real 0m3.676s
00:03:52.110 user 0m1.413s
00:03:52.110 sys 0m2.304s
00:03:52.110 15:55:27 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:52.110 15:55:27 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:52.110 ************************************
00:03:52.110 END TEST even_2G_alloc
************************************ 00:03:52.110 15:55:27 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:52.110 15:55:27 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:52.110 15:55:27 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:52.110 15:55:27 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:52.110 15:55:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:52.110 ************************************ 00:03:52.110 START TEST odd_alloc 00:03:52.110 ************************************ 00:03:52.110 15:55:27 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:03:52.110 15:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:52.110 15:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:52.110 15:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:52.110 15:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:52.110 15:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:52.110 15:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:52.110 15:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:52.110 15:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:52.110 15:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:52.110 15:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:52.110 15:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:52.110 15:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:52.110 15:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:52.110 15:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:52.110 15:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:52.110 15:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:52.110 15:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:52.110 15:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:52.110 15:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:52.110 15:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:52.110 15:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:52.110 15:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:52.110 15:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:52.110 15:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:52.110 15:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:52.110 15:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:52.110 15:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.110 15:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 
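The odd_alloc test that starts above asks get_test_nr_hugepages for 2098176 kB of huge memory (HUGEMEM=2049), which with a 2048 kB hugepage size works out to nr_hugepages=1025, and the per-node loop leaves 513 pages on node0 and 512 on node1 before setup.sh is rerun. A small sketch of that sizing arithmetic, written to reproduce the 513/512 split the trace shows rather than copied from hugepages.sh (the round-up is inferred from the 1025 the log reports):

size_kb=2098176                 # requested huge memory (HUGEMEM=2049 MiB)
hugepage_kb=2048                # Hugepagesize reported in /proc/meminfo
nr_hugepages=$(( (size_kb + hugepage_kb - 1) / hugepage_kb ))   # 1025

_no_nodes=2
_nr_hugepages=$nr_hugepages
declare -a nodes_test
# Hand each node, from the last one down, an even share of what remains;
# whatever is left lands on node 0, which is where the odd 1025th page goes.
while (( _no_nodes > 0 )); do
	nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
	_nr_hugepages=$(( _nr_hugepages - nodes_test[_no_nodes - 1] ))
	_no_nodes=$(( _no_nodes - 1 ))
done
echo "nr_hugepages=$nr_hugepages node0=${nodes_test[0]} node1=${nodes_test[1]}"
# -> nr_hugepages=1025 node0=513 node1=512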
00:03:55.437 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:55.437 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:55.437 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:55.437 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:55.437 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:55.437 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:55.437 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:55.437 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:55.437 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:55.437 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:55.437 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:55.437 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:55.437 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:55.437 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:55.437 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:55.437 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:55.437 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105106832 kB' 'MemAvailable: 108592636 kB' 'Buffers: 2704 kB' 'Cached: 14446972 kB' 'SwapCached: 0 kB' 'Active: 11506868 kB' 'Inactive: 3523448 kB' 'Active(anon): 11032684 kB' 'Inactive(anon): 0 kB' 
'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584004 kB' 'Mapped: 172032 kB' 'Shmem: 10452044 kB' 'KReclaimable: 529116 kB' 'Slab: 1392740 kB' 'SReclaimable: 529116 kB' 'SUnreclaim: 863624 kB' 'KernelStack: 27520 kB' 'PageTables: 9104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508444 kB' 'Committed_AS: 12604932 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235812 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4384116 kB' 'DirectMap2M: 28850176 kB' 'DirectMap1G: 102760448 kB' 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.437 15:55:31 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.437 
15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.437 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.438 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.438 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.438 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.438 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.438 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.438 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.438 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.438 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.438 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.438 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.438 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.438 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.438 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.438 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.438 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.438 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.438 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.438 15:55:31 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.438 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.438 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.438 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.438 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.438 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.438 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.438 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.438 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.438 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.438 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.438 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.438 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.438 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.438 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.438 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.438 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.438 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.438 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.438 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.438 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.438 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.705 
15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.705 15:55:31 
setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.705 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105107880 kB' 'MemAvailable: 108593684 kB' 'Buffers: 2704 kB' 'Cached: 14446976 kB' 'SwapCached: 0 kB' 'Active: 11506840 kB' 'Inactive: 3523448 kB' 'Active(anon): 11032656 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584000 kB' 'Mapped: 172032 kB' 'Shmem: 10452048 kB' 'KReclaimable: 529116 kB' 'Slab: 1392688 kB' 'SReclaimable: 529116 kB' 'SUnreclaim: 863572 kB' 'KernelStack: 27312 kB' 'PageTables: 8580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508444 kB' 'Committed_AS: 12605084 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235764 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4384116 kB' 'DirectMap2M: 28850176 kB' 'DirectMap1G: 102760448 kB' 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.706 
15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.706 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.707 15:55:31 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.707 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105109544 kB' 'MemAvailable: 108595348 kB' 'Buffers: 2704 kB' 'Cached: 14446988 kB' 'SwapCached: 0 kB' 'Active: 11507012 kB' 'Inactive: 3523448 kB' 'Active(anon): 11032828 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584036 kB' 'Mapped: 171952 kB' 'Shmem: 10452060 kB' 'KReclaimable: 529116 kB' 'Slab: 1392364 kB' 'SReclaimable: 529116 kB' 'SUnreclaim: 863248 kB' 'KernelStack: 27504 kB' 'PageTables: 9028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508444 kB' 'Committed_AS: 12606716 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235732 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4384116 kB' 'DirectMap2M: 28850176 kB' 'DirectMap1G: 102760448 kB' 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.708 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.709 15:55:31 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:55.709 nr_hugepages=1025 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:55.709 resv_hugepages=0 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:55.709 surplus_hugepages=0 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:55.709 anon_hugepages=0 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == 
nr_hugepages + surp + resv )) 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.709 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105110716 kB' 'MemAvailable: 108596520 kB' 'Buffers: 2704 kB' 'Cached: 14447024 kB' 'SwapCached: 0 kB' 'Active: 11508000 kB' 'Inactive: 3523448 kB' 'Active(anon): 11033816 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585036 kB' 'Mapped: 171960 kB' 'Shmem: 10452096 kB' 'KReclaimable: 529116 kB' 'Slab: 1392364 kB' 'SReclaimable: 529116 kB' 'SUnreclaim: 863248 kB' 'KernelStack: 27472 kB' 'PageTables: 9292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508444 kB' 'Committed_AS: 12607104 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235764 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4384116 kB' 'DirectMap2M: 28850176 kB' 'DirectMap1G: 102760448 kB' 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.710 
15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.710 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
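The trace above is setup/common.sh's get_meminfo walking /proc/meminfo one "key: value" line at a time with IFS=': ', skipping every field that is not HugePages_Total and echoing the value once the key matches. A minimal standalone sketch of the same lookup, assuming only a readable /proc/meminfo (the function name below is illustrative, not part of the SPDK scripts):

# Minimal sketch of the lookup performed in the trace above: split each
# /proc/meminfo line on ': ' and return the value of the requested key.
get_meminfo_value() {
  local get=$1 var val _
  while IFS=': ' read -r var val _; do
    if [[ $var == "$get" ]]; then
      echo "$val"        # e.g. 1025 for HugePages_Total on this test node
      return 0
    fi
  done < /proc/meminfo
  return 1
}

get_meminfo_value HugePages_Total

The real helper additionally accepts a node argument and, as the trace shows further on, then switches its source file to the per-node meminfo under /sys/devices/system/node.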
00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.711 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.711 15:55:31 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53654676 kB' 'MemUsed: 12004332 kB' 'SwapCached: 0 kB' 'Active: 4855244 kB' 'Inactive: 3298680 kB' 'Active(anon): 4702696 
kB' 'Inactive(anon): 0 kB' 'Active(file): 152548 kB' 'Inactive(file): 3298680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7841576 kB' 'Mapped: 72968 kB' 'AnonPages: 315548 kB' 'Shmem: 4390348 kB' 'KernelStack: 16520 kB' 'PageTables: 5124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 396548 kB' 'Slab: 916376 kB' 'SReclaimable: 396548 kB' 'SUnreclaim: 519828 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
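When get_meminfo is called with a node argument, the trace shows it swapping its source from /proc/meminfo to /sys/devices/system/node/node0/meminfo and stripping the leading "Node <id> " prefix before the key comparison (the mem=("${mem[@]#Node +([0-9]) }") step). A sketch of the same per-node read, assuming the sysfs layout seen in the trace; the helper name is illustrative:

# Per-node lines look like "Node 0 HugePages_Total:   512"; read the fields
# and return the count, mirroring the node-specific branch traced above.
node_hugepages_total() {
  local node=$1 tag id key val
  while read -r tag id key val _; do
    if [[ $tag == Node && $key == HugePages_Total: ]]; then
      echo "$val"
      return 0
    fi
  done < "/sys/devices/system/node/node${node}/meminfo"
  return 1
}

node_hugepages_total 0    # 512 on node 0 in the trace above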
00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.712 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.713 15:55:31 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679872 kB' 'MemFree: 51456084 kB' 'MemUsed: 9223788 kB' 'SwapCached: 0 kB' 'Active: 6652596 kB' 'Inactive: 224768 kB' 'Active(anon): 6330960 kB' 'Inactive(anon): 0 kB' 'Active(file): 321636 kB' 'Inactive(file): 224768 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6608200 kB' 'Mapped: 98984 kB' 'AnonPages: 269356 kB' 'Shmem: 6061796 kB' 'KernelStack: 10968 kB' 'PageTables: 3528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132568 kB' 'Slab: 475988 kB' 'SReclaimable: 132568 kB' 'SUnreclaim: 343420 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:55.713 
15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.713 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
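Taken together, the two per-node scans report 512 huge pages on node 0 and 513 on node 1, which accounts exactly for the odd total of 1025 allocated by this test; the hugepages.sh@110 check earlier asserts the same equality against the global count. A sketch of that cross-check, reusing the illustrative helpers sketched above (not part of setup/common.sh):

# Cross-check: the per-node HugePages_Total values should add up to the
# global count (512 + 513 == 1025 for this odd_alloc run).
total=0
for node_dir in /sys/devices/system/node/node[0-9]*; do
  node=${node_dir##*node}
  (( total += $(node_hugepages_total "$node") ))
done
global=$(get_meminfo_value HugePages_Total)
(( total == global )) && echo "per-node split consistent: $total == $global"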
00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:55.714 15:55:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:55.715 15:55:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:55.715 15:55:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:55.715 node0=512 expecting 513 00:03:55.715 15:55:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:55.715 15:55:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:55.715 15:55:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:55.715 15:55:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:55.715 node1=513 expecting 512 00:03:55.715 15:55:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:55.715 00:03:55.715 real 0m3.673s 00:03:55.715 user 0m1.423s 00:03:55.715 sys 0m2.295s 00:03:55.715 15:55:31 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.715 15:55:31 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:55.715 ************************************ 00:03:55.715 END TEST odd_alloc 00:03:55.715 ************************************ 00:03:55.715 15:55:31 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:55.715 15:55:31 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:55.715 15:55:31 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:55.715 15:55:31 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.715 15:55:31 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:55.715 ************************************ 00:03:55.715 START TEST custom_alloc 00:03:55.715 ************************************ 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 
1 )) 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:55.715 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:55.976 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:55.976 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:55.976 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:55.976 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:55.976 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:55.976 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:55.976 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:55.976 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:55.976 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:55.976 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:55.976 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:55.976 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:55.976 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:55.976 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:55.976 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:55.976 15:55:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:55.976 15:55:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.976 15:55:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:59.282 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:59.282 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:59.282 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:59.282 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 
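In the custom_alloc setup traced above, 1048576 kB at the reported 2048 kB hugepage size works out to 512 pages for nodes_hp[0], and 2097152 kB to 1024 pages for nodes_hp[1]; the two requests are then joined into HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' before scripts/setup.sh is invoked. A sketch of that assembly step with the values taken from the trace (variable handling here is simplified and illustrative):

# Join the per-node requests from the trace into the comma-separated
# HUGENODE string handed to scripts/setup.sh.
declare -a nodes_hp=([0]=512 [1]=1024)   # values computed in the trace above
parts=()
for node in "${!nodes_hp[@]}"; do
  parts+=("nodes_hp[$node]=${nodes_hp[node]}")
done
HUGENODE=$(IFS=','; printf '%s' "${parts[*]}")
echo "$HUGENODE"    # nodes_hp[0]=512,nodes_hp[1]=1024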
00:03:59.282 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:59.282 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:59.282 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:59.282 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:59.282 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:59.282 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:59.282 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:59.282 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:59.282 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:59.282 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:59.282 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:59.282 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:59.282 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:59.282 15:55:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:59.282 15:55:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:59.282 15:55:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:59.282 15:55:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:59.282 15:55:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:59.282 15:55:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104068216 kB' 'MemAvailable: 107554020 kB' 'Buffers: 2704 kB' 'Cached: 14447168 kB' 'SwapCached: 0 kB' 'Active: 11509036 kB' 'Inactive: 3523448 kB' 'Active(anon): 11034852 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 
kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585848 kB' 'Mapped: 172040 kB' 'Shmem: 10452240 kB' 'KReclaimable: 529116 kB' 'Slab: 1391320 kB' 'SReclaimable: 529116 kB' 'SUnreclaim: 862204 kB' 'KernelStack: 27456 kB' 'PageTables: 8572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985180 kB' 'Committed_AS: 12608076 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235860 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4384116 kB' 'DirectMap2M: 28850176 kB' 'DirectMap1G: 102760448 kB' 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
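The get_meminfo calls being traced here follow a simple pattern in setup/common.sh: slurp /proc/meminfo (or a per-node meminfo file when a node is given), strip the leading "Node N " prefix, then read each "Key: value" pair and skip with continue until the requested key matches. The sketch below is a self-contained illustration of that pattern; the sysfs path handling and the zero fallback are assumptions, and only the overall shape mirrors the traced helper.

```bash
#!/usr/bin/env bash
# Minimal sketch of the get_meminfo pattern in the trace: read the (optionally
# per-node) meminfo file, drop any "Node N " prefix, then scan "Key: value"
# pairs until the requested key matches and print its value.
shopt -s extglob

sketch_get_meminfo() {
  local get=$1 node=${2:-} var val _ line
  local mem_f=/proc/meminfo
  [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
    mem_f=/sys/devices/system/node/node$node/meminfo

  local -a mem
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")     # per-node files prefix every line with "Node N "

  for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<< "$line"
    [[ $var == "$get" ]] || continue   # skip every other key, as the xtrace shows
    echo "${val:-0}"
    return 0
  done
  echo 0
}

sketch_get_meminfo HugePages_Total     # e.g. 1536 with the pool configured above
sketch_get_meminfo HugePages_Free 0    # the same counter restricted to NUMA node 0
```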
00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.283 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 
-- # local get=HugePages_Surp 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104068992 kB' 'MemAvailable: 107554796 kB' 'Buffers: 2704 kB' 'Cached: 14447172 kB' 'SwapCached: 0 kB' 'Active: 11508224 kB' 'Inactive: 3523448 kB' 'Active(anon): 11034040 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585020 kB' 'Mapped: 171980 kB' 'Shmem: 10452244 kB' 'KReclaimable: 529116 kB' 'Slab: 1391320 kB' 'SReclaimable: 529116 kB' 'SUnreclaim: 862204 kB' 'KernelStack: 27376 kB' 'PageTables: 8428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985180 kB' 'Committed_AS: 12605040 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235684 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4384116 kB' 'DirectMap2M: 28850176 kB' 'DirectMap1G: 102760448 kB' 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.284 
15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.284 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.285 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.286 15:55:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104070316 kB' 'MemAvailable: 107556120 kB' 'Buffers: 2704 kB' 'Cached: 14447188 kB' 'SwapCached: 0 kB' 'Active: 11507588 kB' 'Inactive: 3523448 kB' 'Active(anon): 11033404 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584512 kB' 'Mapped: 171964 kB' 'Shmem: 10452260 kB' 'KReclaimable: 529116 kB' 'Slab: 1391368 kB' 'SReclaimable: 529116 kB' 'SUnreclaim: 862252 kB' 'KernelStack: 27264 kB' 'PageTables: 8392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985180 kB' 'Committed_AS: 12605060 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235636 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4384116 kB' 'DirectMap2M: 28850176 kB' 'DirectMap1G: 102760448 kB' 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.286 15:55:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.286 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
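Around this point the trace reads AnonHugePages, HugePages_Surp and HugePages_Rsvd in turn, while HugePages_Total reports the 1536 pages that the 512 + 1024 HUGENODE split should yield. The snippet below is a rough, self-contained sketch of that kind of check; the awk one-liner stands in for get_meminfo, and the exact pass/fail arithmetic in verify_nr_hugepages may differ.

```bash
#!/usr/bin/env bash
# Rough sketch of the verification step traced here: read the hugepage counters
# from /proc/meminfo and compare the pool against the requested 512 + 1024 pages.

get() { awk -v k="$1:" '$1 == k { print $2 }' /proc/meminfo; }

expected=1536                       # nodes_hp[0]=512 + nodes_hp[1]=1024
anon=$(get AnonHugePages)
surp=$(get HugePages_Surp)
resv=$(get HugePages_Rsvd)
total=$(get HugePages_Total)

echo "anon=${anon} surp=${surp} resv=${resv} total=${total}"
if (( total - surp == expected )); then
  echo "hugepage pool matches the requested per-node split"
else
  echo "unexpected hugepage pool size: ${total}" >&2
  exit 1
fi
```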
00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.287 15:55:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.287 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:59.288 nr_hugepages=1536 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:59.288 resv_hugepages=0 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:59.288 surplus_hugepages=0 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:59.288 anon_hugepages=0 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104070268 kB' 'MemAvailable: 107556072 kB' 'Buffers: 2704 kB' 'Cached: 14447228 kB' 'SwapCached: 0 kB' 'Active: 11507244 kB' 'Inactive: 3523448 kB' 'Active(anon): 11033060 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 
0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584108 kB' 'Mapped: 171964 kB' 'Shmem: 10452300 kB' 'KReclaimable: 529116 kB' 'Slab: 1391368 kB' 'SReclaimable: 529116 kB' 'SUnreclaim: 862252 kB' 'KernelStack: 27248 kB' 'PageTables: 8340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985180 kB' 'Committed_AS: 12605080 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235636 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4384116 kB' 'DirectMap2M: 28850176 kB' 'DirectMap1G: 102760448 kB' 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
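The trace around this point is setup/common.sh's get_meminfo walking the /proc/meminfo dump it just printed, field by field, until it reaches the requested key. A minimal sketch of that lookup, reconstructed only from the commands visible in the xtrace (the _sketch name, the loop shape and the extglob enable are illustrative assumptions, not the project's exact code), looks like this:

# Sketch of the traced field lookup: read a meminfo file, strip the
# "Node N " prefix that per-node files carry, split each line on ': '
# and echo the value of the first field whose name matches.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # node-scoped query uses /sys/devices/system/node/node<N>/meminfo when it exists
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    shopt -s extglob                      # assumption: needed for the +([0-9]) strip below
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")      # per-node lines look like "Node 0 MemTotal: ..."
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}
# get_meminfo_sketch HugePages_Total   -> 1536 on this system
# get_meminfo_sketch HugePages_Rsvd    -> 0, which is where resv=0 above came from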
00:03:59.288 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' (xtrace 00:03:59.288-00:03:59.289: Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted and AnonHugePages each fail the HugePages_Total match and hit setup/common.sh@32 continue) 00:03:59.289 15:55:34
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.289 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.289 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.289 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.289 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.289 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.289 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.289 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.289 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.289 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.289 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.289 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.289 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.289 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.289 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.289 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.289 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.289 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.289 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.289 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.289 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.289 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.289 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.289 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.289 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.289 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.289 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.289 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.289 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.289 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:59.289 15:55:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:59.289 15:55:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:59.289 15:55:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:59.289 15:55:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:59.289 15:55:34 setup.sh.hugepages.custom_alloc -- 
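At setup/hugepages.sh@100-@110 above, the trace has resv=0, nr_hugepages=1536, surplus_hugepages=0 and anon_hugepages=0, then re-reads HugePages_Total and gets 1536. The arithmetic being asserted reduces to the short sketch below; the values are taken from this run, get_meminfo_sketch is the illustrative helper sketched earlier (not the project's actual function name), and how surp was first obtained lies outside this excerpt.

# consistency check traced at hugepages.sh@107/@110: the kernel's total must
# equal the requested custom allocation plus surplus and reserved pages
resv=0             # get_meminfo HugePages_Rsvd, echoed as 0 above
surp=0             # surplus_hugepages=0 in this run
nr_hugepages=1536  # custom_alloc request: 512 on node0 + 1024 on node1
total=$(get_meminfo_sketch HugePages_Total)   # 1536 per the /proc/meminfo dump above
(( total == nr_hugepages + surp + resv )) && echo "global hugepage accounting OK"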
setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.289 15:55:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:59.289 15:55:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.289 15:55:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:59.289 15:55:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:59.289 15:55:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:59.289 15:55:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:59.289 15:55:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:59.289 15:55:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:59.289 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.289 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:59.289 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:59.290 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.290 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.290 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:59.290 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:59.290 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.290 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.290 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.290 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.290 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53652632 kB' 'MemUsed: 12006376 kB' 'SwapCached: 0 kB' 'Active: 4857272 kB' 'Inactive: 3298680 kB' 'Active(anon): 4704724 kB' 'Inactive(anon): 0 kB' 'Active(file): 152548 kB' 'Inactive(file): 3298680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7841708 kB' 'Mapped: 72980 kB' 'AnonPages: 317484 kB' 'Shmem: 4390480 kB' 'KernelStack: 16312 kB' 'PageTables: 4908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 396548 kB' 'Slab: 915960 kB' 'SReclaimable: 396548 kB' 'SUnreclaim: 519412 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:59.290 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.290 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.290 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.290 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.290 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.290 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue (xtrace 00:03:59.290-00:03:59.291, node0 meminfo scan: MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total and HugePages_Free each fail the HugePages_Surp match and hit setup/common.sh@32 continue) 00:03:59.291 15:55:35
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.291 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.291 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.291 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:59.291 15:55:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:59.291 15:55:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:59.291 15:55:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:59.291 15:55:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:59.291 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.291 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:59.291 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:59.291 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.291 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.291 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:59.291 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:59.291 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.291 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.291 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.291 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.291 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679872 kB' 'MemFree: 50418352 kB' 'MemUsed: 10261520 kB' 'SwapCached: 0 kB' 'Active: 6649988 kB' 'Inactive: 224768 kB' 'Active(anon): 6328352 kB' 'Inactive(anon): 0 kB' 'Active(file): 321636 kB' 'Inactive(file): 224768 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6608244 kB' 'Mapped: 98984 kB' 'AnonPages: 266624 kB' 'Shmem: 6061840 kB' 'KernelStack: 10936 kB' 'PageTables: 3432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132568 kB' 'Slab: 475408 kB' 'SReclaimable: 132568 kB' 'SUnreclaim: 342840 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:59.291 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.291 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.291 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.291 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.291 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.291 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.291 15:55:35 
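Both per-node dumps are now in the log: node0 reports HugePages_Total/HugePages_Free 512 and node1 reports 1024, matching the nodes_sys assignments at hugepages.sh@30. The node loop traced at @115-@117 folds reserved and per-node surplus pages onto each node's expected count; the sketch below mirrors only those traced steps, and seeding nodes_test with the 512/1024 split (plus the closing echo) is an assumption about code outside this excerpt.

# per-node pass traced at hugepages.sh@115-@117: start from the expected split,
# then add reserved and per-node surplus pages before the later comparison
nodes_test=([0]=512 [1]=1024)   # assumed seed: the custom_alloc split for this run
resv=0
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))                        # @116
    surp_n=$(get_meminfo_sketch HugePages_Surp "$node")   # @117: 0 for node0 above
    (( nodes_test[node] += surp_n ))
    echo "node$node expects ${nodes_test[node]} hugepages" # final check is not in this excerpt
done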
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' (xtrace 00:03:59.291-00:03:59.292, node1 meminfo scan: MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable and SUnreclaim each fail the HugePages_Surp match and hit setup/common.sh@32 continue) 00:03:59.292 15:55:35
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.292 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.292 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.292 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.292 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.292 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.292 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.292 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.292 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.292 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.292 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.292 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.292 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.292 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.292 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.292 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.292 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.292 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.292 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.292 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.292 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.292 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.292 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.292 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.292 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.292 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.292 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.292 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.292 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.292 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.292 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.292 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.292 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.292 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.292 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.292 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.292 15:55:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:59.292 15:55:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:59.292 15:55:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:59.292 15:55:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:59.292 15:55:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:59.292 15:55:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:59.292 node0=512 expecting 512 00:03:59.292 15:55:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:59.292 15:55:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:59.292 15:55:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:59.292 15:55:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:59.292 node1=1024 expecting 1024 00:03:59.292 15:55:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:59.292 00:03:59.292 real 0m3.535s 00:03:59.292 user 0m1.344s 00:03:59.292 sys 0m2.222s 00:03:59.292 15:55:35 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.292 15:55:35 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:59.292 ************************************ 00:03:59.292 END TEST custom_alloc 00:03:59.292 ************************************ 00:03:59.292 15:55:35 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:59.292 15:55:35 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:59.292 15:55:35 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:59.292 15:55:35 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.292 15:55:35 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:59.553 ************************************ 00:03:59.553 START TEST no_shrink_alloc 00:03:59.553 ************************************ 00:03:59.553 15:55:35 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:59.553 15:55:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:59.553 15:55:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:59.553 15:55:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:59.553 15:55:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:59.553 15:55:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:59.553 15:55:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:59.553 15:55:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:59.553 15:55:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:59.553 15:55:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- 
# get_test_nr_hugepages_per_node 0 00:03:59.553 15:55:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:59.553 15:55:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:59.553 15:55:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:59.553 15:55:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:59.553 15:55:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:59.553 15:55:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:59.553 15:55:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:59.553 15:55:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:59.553 15:55:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:59.553 15:55:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:59.553 15:55:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:59.553 15:55:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.553 15:55:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:02.851 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:02.851 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:02.851 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:02.851 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:02.851 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:02.851 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:02.851 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:02.851 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:02.851 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:02.851 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:02.851 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:02.851 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:02.851 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:02.851 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:02.851 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:02.851 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:02.851 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:03.117 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:03.117 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:03.117 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:03.117 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:03.117 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:03.117 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:03.117 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:03.117 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:03.117 
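[Note: the xtrace that follows is setup/common.sh's get_meminfo walking /proc/meminfo one 'key: value' field at a time (IFS=': ', read -r var val _) until it reaches the requested key, then echoing that key's value; with a node argument it reads /sys/devices/system/node/node<N>/meminfo instead. A minimal standalone sketch of the same parsing pattern is shown below for reference only; the helper name parse_meminfo and the example keys are illustrative and are not part of the SPDK scripts.]
parse_meminfo() {
    # Print the value column for one /proc/meminfo key, e.g. HugePages_Surp -> 0.
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1
}
# Example: parse_meminfo AnonHugePages   # prints the numeric value, 0 on this host
[On this node such a parser would print 0 for AnonHugePages, HugePages_Surp and HugePages_Rsvd, which matches the 'echo 0' that ends each get_meminfo loop in the trace below.]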
15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:03.117 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:03.117 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:03.117 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:03.117 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.117 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.117 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.117 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.117 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.117 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.117 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.117 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.117 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105104908 kB' 'MemAvailable: 108590712 kB' 'Buffers: 2704 kB' 'Cached: 14447344 kB' 'SwapCached: 0 kB' 'Active: 11508500 kB' 'Inactive: 3523448 kB' 'Active(anon): 11034316 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585132 kB' 'Mapped: 172064 kB' 'Shmem: 10452416 kB' 'KReclaimable: 529116 kB' 'Slab: 1391468 kB' 'SReclaimable: 529116 kB' 'SUnreclaim: 862352 kB' 'KernelStack: 27280 kB' 'PageTables: 8416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12605972 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235604 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4384116 kB' 'DirectMap2M: 28850176 kB' 'DirectMap1G: 102760448 kB' 00:04:03.117 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.117 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.117 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.117 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.117 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.117 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.117 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.117 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.117 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.117 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue [... xtrace elided: get_meminfo skipped the remaining non-matching /proc/meminfo fields (Buffers through VmallocTotal) with the same continue / IFS=': ' / read -r var val _ pattern ...] 00:04:03.118 15:55:38 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.118 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.118 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.118 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.118 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.118 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.118 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.118 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.118 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.118 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.118 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.118 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.118 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.118 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.118 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.118 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.118 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.118 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.118 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:03.118 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:03.118 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:03.118 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.118 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:03.118 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:03.118 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.118 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.118 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.118 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.118 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.118 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.118 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.118 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.118 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105105548 kB' 'MemAvailable: 108591352 kB' 'Buffers: 2704 kB' 'Cached: 14447348 kB' 'SwapCached: 0 kB' 'Active: 11508524 kB' 
'Inactive: 3523448 kB' 'Active(anon): 11034340 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585208 kB' 'Mapped: 172000 kB' 'Shmem: 10452420 kB' 'KReclaimable: 529116 kB' 'Slab: 1391500 kB' 'SReclaimable: 529116 kB' 'SUnreclaim: 862384 kB' 'KernelStack: 27264 kB' 'PageTables: 8400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12605992 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235604 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4384116 kB' 'DirectMap2M: 28850176 kB' 'DirectMap1G: 102760448 kB' 00:04:03.118 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.118 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.118 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.118 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.119 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.119 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.119 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.119 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.119 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.119 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.119 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.119 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.119 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.119 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.119 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.119 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.119 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.119 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.119 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.119 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.119 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.119 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.119 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.119 
15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ [... xtrace elided: get_meminfo skipped the remaining non-matching /proc/meminfo fields (Active through Unaccepted) with the same continue / IFS=': ' / read -r var val _ pattern ...] 00:04:03.119 15:55:38
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.119 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.119 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.119 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.119 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.119 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.119 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.119 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.119 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.119 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.119 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.119 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.119 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.119 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.119 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.119 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:03.119 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:03.119 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:03.119 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:03.119 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:03.119 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105105904 kB' 'MemAvailable: 108591708 kB' 'Buffers: 2704 kB' 'Cached: 14447364 kB' 'SwapCached: 0 kB' 'Active: 11508536 kB' 'Inactive: 3523448 kB' 'Active(anon): 11034352 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 
'Writeback: 0 kB' 'AnonPages: 585208 kB' 'Mapped: 172000 kB' 'Shmem: 10452436 kB' 'KReclaimable: 529116 kB' 'Slab: 1391500 kB' 'SReclaimable: 529116 kB' 'SUnreclaim: 862384 kB' 'KernelStack: 27264 kB' 'PageTables: 8400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12606012 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235604 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4384116 kB' 'DirectMap2M: 28850176 kB' 'DirectMap1G: 102760448 kB' 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.120 15:55:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.120 
15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.120 15:55:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.120 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:03.121 nr_hugepages=1024 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:03.121 resv_hugepages=0 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:03.121 surplus_hugepages=0 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:03.121 anon_hugepages=0 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105112088 kB' 'MemAvailable: 108597892 kB' 'Buffers: 2704 kB' 'Cached: 14447404 kB' 'SwapCached: 0 kB' 'Active: 11508212 kB' 'Inactive: 3523448 kB' 'Active(anon): 11034028 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584820 kB' 'Mapped: 172000 kB' 'Shmem: 10452476 kB' 
'KReclaimable: 529116 kB' 'Slab: 1391500 kB' 'SReclaimable: 529116 kB' 'SUnreclaim: 862384 kB' 'KernelStack: 27248 kB' 'PageTables: 8348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12606036 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235604 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4384116 kB' 'DirectMap2M: 28850176 kB' 'DirectMap1G: 102760448 kB' 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.121 15:55:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.121 15:55:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.121 15:55:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.121 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.122 
15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 52610384 kB' 'MemUsed: 13048624 kB' 'SwapCached: 0 kB' 'Active: 4856512 kB' 'Inactive: 3298680 kB' 'Active(anon): 4703964 kB' 'Inactive(anon): 0 kB' 'Active(file): 152548 kB' 'Inactive(file): 3298680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7841872 kB' 'Mapped: 72984 kB' 'AnonPages: 316460 kB' 'Shmem: 4390644 kB' 'KernelStack: 16296 kB' 'PageTables: 4864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 396548 kB' 'Slab: 915988 kB' 'SReclaimable: 396548 kB' 'SUnreclaim: 519440 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.122 
15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.122 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.123 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.123 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.123 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.123 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.123 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.123 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.123 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.123 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.123 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.123 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.123 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.123 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.123 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.123 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.123 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.123 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.123 15:55:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.123 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.123 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.123 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.123 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.123 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.123 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.123 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.123 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.123 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.123 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.123 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.123 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.123 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.123 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:03.123 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:03.123 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:03.123 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:03.123 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:03.123 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:03.123 node0=1024 expecting 1024 00:04:03.123 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:03.123 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:03.123 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:03.123 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:03.123 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.123 15:55:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:06.428 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:06.428 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:06.428 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:06.428 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:06.428 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:06.428 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:06.428 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:06.428 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:06.428 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:06.428 0000:65:00.0 
(144d a80a): Already using the vfio-pci driver 00:04:06.428 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:06.428 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:06.428 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:06.428 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:06.428 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:06.428 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:06.428 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:06.689 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105100352 kB' 'MemAvailable: 108586164 kB' 'Buffers: 2704 kB' 'Cached: 14447500 kB' 'SwapCached: 0 kB' 'Active: 11510256 kB' 'Inactive: 3523448 kB' 'Active(anon): 11036072 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 586788 kB' 'Mapped: 172076 kB' 'Shmem: 10452572 kB' 'KReclaimable: 529124 kB' 'Slab: 1391252 kB' 'SReclaimable: 529124 kB' 'SUnreclaim: 862128 kB' 'KernelStack: 27296 kB' 'PageTables: 8484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'CommitLimit: 70509468 kB' 'Committed_AS: 12607088 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235572 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4384116 kB' 'DirectMap2M: 28850176 kB' 'DirectMap1G: 102760448 kB' 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.956 15:55:42 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.956 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.957 15:55:42 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 
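The xtrace above is setup/common.sh's get_meminfo helper scanning /proc/meminfo one "Field: value" pair at a time, skipping (continue) every field that is not the one requested and echoing the matching value, which is 0 here for AnonHugePages. A minimal standalone sketch of that read loop, assuming the system-wide /proc/meminfo path with no per-node meminfo override; this is reconstructed from the trace for illustration only, not the actual SPDK script:

    # Sketch of the pattern visible in the trace (hypothetical helper name kept
    # for readability): split each meminfo line on ': ' and return the value of
    # the requested field, or 0 if the field is absent.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done < /proc/meminfo
        echo 0   # field not present
    }

    anon=$(get_meminfo AnonHugePages)   # resolves to 0 in the trace above

The surrounding hugepages.sh checks then repeat the same lookup for HugePages_Surp and HugePages_Rsvd, as the following trace output shows.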
00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105103884 kB' 'MemAvailable: 108589696 kB' 'Buffers: 2704 kB' 'Cached: 14447500 kB' 'SwapCached: 0 kB' 'Active: 11509848 kB' 'Inactive: 3523448 kB' 'Active(anon): 11035664 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 586412 kB' 'Mapped: 172012 kB' 'Shmem: 10452572 kB' 'KReclaimable: 529124 kB' 'Slab: 1391252 kB' 'SReclaimable: 529124 kB' 'SUnreclaim: 862128 kB' 'KernelStack: 27264 kB' 'PageTables: 8400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12607104 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235524 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4384116 kB' 'DirectMap2M: 28850176 kB' 'DirectMap1G: 102760448 kB' 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.957 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.958 15:55:42 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.958 15:55:42 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.958 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@18 -- # local node= 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105103884 kB' 'MemAvailable: 108589696 kB' 'Buffers: 2704 kB' 'Cached: 14447520 kB' 'SwapCached: 0 kB' 'Active: 11509876 kB' 'Inactive: 3523448 kB' 'Active(anon): 11035692 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 586416 kB' 'Mapped: 172012 kB' 'Shmem: 10452592 kB' 'KReclaimable: 529124 kB' 'Slab: 1391252 kB' 'SReclaimable: 529124 kB' 'SUnreclaim: 862128 kB' 'KernelStack: 27264 kB' 'PageTables: 8400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12607128 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235524 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4384116 kB' 'DirectMap2M: 28850176 kB' 'DirectMap1G: 102760448 kB' 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.959 15:55:42 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.959 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.960 15:55:42 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.960 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:06.961 nr_hugepages=1024 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:06.961 resv_hugepages=0 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:06.961 surplus_hugepages=0 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:06.961 anon_hugepages=0 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local 
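The scan above just matched HugePages_Rsvd (0 on this box), and hugepages.sh@107 then checks that the 1024 configured pages equal the requested count plus surplus plus reserved before re-querying HugePages_Total below. A stand-alone equivalent of that lookup and check, using a hypothetical awk helper rather than the repository's get_meminfo:

  # Hypothetical helper: fetch one numeric field from /proc/meminfo.
  meminfo_val() { awk -v k="$1:" '$1 == k { print $2 }' /proc/meminfo; }

  want=1024                                  # the count this test configured (from the trace)
  total=$(meminfo_val HugePages_Total)       # 1024
  surp=$(meminfo_val HugePages_Surp)         # 0
  resv=$(meminfo_val HugePages_Rsvd)         # 0
  (( total == want + surp + resv )) && echo "hugepage accounting consistent"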
get=HugePages_Total 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105104420 kB' 'MemAvailable: 108590232 kB' 'Buffers: 2704 kB' 'Cached: 14447540 kB' 'SwapCached: 0 kB' 'Active: 11509896 kB' 'Inactive: 3523448 kB' 'Active(anon): 11035712 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 586420 kB' 'Mapped: 172012 kB' 'Shmem: 10452612 kB' 'KReclaimable: 529124 kB' 'Slab: 1391252 kB' 'SReclaimable: 529124 kB' 'SUnreclaim: 862128 kB' 'KernelStack: 27264 kB' 'PageTables: 8400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12607148 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235524 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4384116 kB' 'DirectMap2M: 28850176 kB' 'DirectMap1G: 102760448 kB' 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.961 15:55:42 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.961 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.962 15:55:42 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.962 
15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.962 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:06.963 15:55:42 
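get_nodes above walks /sys/devices/system/node/node<N> and records each node's hugepage allocation (1024 on node 0, 0 on node 1, no_nodes=2). A stand-alone sketch in the same spirit, not the repository's exact get_nodes, assuming 2048 kB pages as the Hugepagesize line in the dumps reports:

  node_pages=()
  for node_dir in /sys/devices/system/node/node[0-9]*; do
      node=${node_dir##*node}
      node_pages[node]=$(< "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
  done
  echo "nodes found: ${#node_pages[@]}"                # 2 on this box
  for node in "${!node_pages[@]}"; do
      echo "node$node: ${node_pages[node]} x 2048 kB hugepages"
  done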
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 52604896 kB' 'MemUsed: 13054112 kB' 'SwapCached: 0 kB' 'Active: 4856572 kB' 'Inactive: 3298680 kB' 'Active(anon): 4704024 kB' 'Inactive(anon): 0 kB' 'Active(file): 152548 kB' 'Inactive(file): 3298680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7841988 kB' 'Mapped: 72980 kB' 'AnonPages: 316444 kB' 'Shmem: 4390760 kB' 'KernelStack: 16312 kB' 'PageTables: 4868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 396548 kB' 'Slab: 915968 kB' 'SReclaimable: 396548 kB' 'SUnreclaim: 519420 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == 
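For the per-node query the helper switches to /sys/devices/system/node/node0/meminfo, which adds a MemUsed field that /proc/meminfo lacks. The numbers dumped above are self-consistent, as a quick check against the traced values shows:

  # Values copied from the node0 dump in the trace (all kB).
  mem_total=65659008 mem_free=52604896 mem_used=13054112
  (( mem_total - mem_free == mem_used )) && echo "node0: 65659008 - 52604896 = 13054112, as reported"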
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.963 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.964 
15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:06.964 15:55:42 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:06.964 node0=1024 expecting 1024 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:06.964 00:04:06.964 real 0m7.595s 00:04:06.964 user 0m2.961s 00:04:06.964 sys 0m4.741s 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:06.964 15:55:42 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:06.964 ************************************ 00:04:06.964 END TEST no_shrink_alloc 00:04:06.964 ************************************ 00:04:06.964 15:55:42 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:06.964 15:55:42 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:06.964 15:55:42 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:06.964 15:55:42 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:06.964 15:55:42 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:06.964 15:55:42 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:06.964 15:55:42 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:06.964 15:55:42 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:06.964 15:55:42 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:06.964 15:55:42 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:06.964 15:55:42 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:06.964 15:55:42 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:06.964 15:55:42 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:06.964 15:55:42 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:06.964 15:55:42 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:06.964 00:04:06.964 real 0m26.473s 00:04:06.964 user 0m10.148s 00:04:06.964 sys 0m16.551s 00:04:06.964 15:55:42 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:06.964 15:55:42 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:06.964 ************************************ 00:04:06.964 END TEST hugepages 00:04:06.964 ************************************ 00:04:07.226 15:55:42 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:07.226 15:55:42 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:07.226 15:55:42 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.226 15:55:42 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.226 15:55:42 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:07.226 ************************************ 00:04:07.226 START TEST driver 00:04:07.226 ************************************ 00:04:07.226 15:55:42 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:07.226 * Looking for test storage... 
00:04:07.226 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:07.226 15:55:42 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:07.226 15:55:42 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:07.226 15:55:42 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:12.515 15:55:47 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:12.515 15:55:47 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:12.515 15:55:47 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.515 15:55:47 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:12.515 ************************************ 00:04:12.515 START TEST guess_driver 00:04:12.515 ************************************ 00:04:12.515 15:55:47 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:12.515 15:55:47 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:12.515 15:55:47 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:12.515 15:55:47 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:12.515 15:55:47 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:12.515 15:55:47 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:12.515 15:55:47 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:12.515 15:55:47 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:12.515 15:55:47 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:12.515 15:55:47 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:12.515 15:55:47 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 314 > 0 )) 00:04:12.515 15:55:47 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:12.515 15:55:47 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:12.515 15:55:47 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:12.515 15:55:47 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:12.515 15:55:47 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:12.515 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:12.515 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:12.515 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:12.515 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:12.515 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:12.515 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:12.515 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:12.515 15:55:47 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:12.515 15:55:47 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:12.515 15:55:47 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:12.515 15:55:47 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:12.515 15:55:47 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:12.515 Looking for driver=vfio-pci 00:04:12.515 15:55:47 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:12.515 15:55:47 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.515 15:55:47 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.515 15:55:47 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:15.060 15:55:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.060 15:55:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.060 15:55:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.060 15:55:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.060 15:55:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.060 15:55:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.060 15:55:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.060 15:55:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.060 15:55:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.319 15:55:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.319 15:55:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.319 15:55:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.319 15:55:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.319 15:55:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.319 15:55:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.319 15:55:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.319 15:55:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.319 15:55:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.319 15:55:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.319 15:55:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.319 15:55:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.319 15:55:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.319 15:55:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.319 15:55:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.319 15:55:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.319 15:55:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.319 15:55:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.319 15:55:51 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.319 15:55:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.319 15:55:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.319 15:55:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.319 15:55:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.319 15:55:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.319 15:55:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.319 15:55:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.319 15:55:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.319 15:55:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.319 15:55:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.319 15:55:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.319 15:55:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.319 15:55:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.319 15:55:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.319 15:55:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.319 15:55:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.319 15:55:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.319 15:55:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.319 15:55:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.319 15:55:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.319 15:55:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.319 15:55:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.319 15:55:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.890 15:55:51 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:15.890 15:55:51 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:15.890 15:55:51 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:15.890 15:55:51 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:21.233 00:04:21.233 real 0m8.362s 00:04:21.233 user 0m2.655s 00:04:21.233 sys 0m4.835s 00:04:21.233 15:55:56 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:21.233 15:55:56 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:21.233 ************************************ 00:04:21.233 END TEST guess_driver 00:04:21.233 ************************************ 00:04:21.233 15:55:56 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:21.233 00:04:21.233 real 0m13.427s 00:04:21.233 user 0m4.187s 00:04:21.233 sys 0m7.585s 00:04:21.233 15:55:56 
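For reference, the decision the guess_driver run above reaches can be condensed into the two checks that appear in the trace: a non-empty /sys/kernel/iommu_groups glob (314 groups here) and a resolvable vfio_pci dependency chain from modprobe --show-depends. This is a sketch of that pick, not the script itself; the success and failure strings are the ones the trace compares against, and the enable_unsafe_noiommu_mode probe also shown above is omitted for brevity.

# Prefer vfio-pci when the IOMMU is populated and the vfio_pci module resolves.
pick_pci_driver() {
    local groups=(/sys/kernel/iommu_groups/*)
    if [[ -e ${groups[0]} ]] && modprobe --show-depends vfio_pci >/dev/null 2>&1; then
        echo vfio-pci
    else
        echo 'No valid driver found'
    fi
}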
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:21.233 15:55:56 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:21.233 ************************************ 00:04:21.233 END TEST driver 00:04:21.233 ************************************ 00:04:21.233 15:55:56 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:21.233 15:55:56 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:21.233 15:55:56 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:21.233 15:55:56 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.233 15:55:56 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:21.233 ************************************ 00:04:21.233 START TEST devices 00:04:21.233 ************************************ 00:04:21.233 15:55:56 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:21.233 * Looking for test storage... 00:04:21.233 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:21.233 15:55:56 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:21.233 15:55:56 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:21.233 15:55:56 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:21.233 15:55:56 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:24.534 15:56:00 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:24.534 15:56:00 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:24.534 15:56:00 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:24.534 15:56:00 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:24.535 15:56:00 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:24.535 15:56:00 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:24.535 15:56:00 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:24.535 15:56:00 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:24.535 15:56:00 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:24.535 15:56:00 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:24.535 15:56:00 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:24.535 15:56:00 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:24.535 15:56:00 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:24.535 15:56:00 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:24.535 15:56:00 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:24.535 15:56:00 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:24.535 15:56:00 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:24.535 15:56:00 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:04:24.535 15:56:00 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:24.535 15:56:00 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:24.535 15:56:00 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:24.535 
15:56:00 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:24.796 No valid GPT data, bailing 00:04:24.797 15:56:00 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:24.797 15:56:00 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:24.797 15:56:00 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:24.797 15:56:00 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:24.797 15:56:00 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:24.797 15:56:00 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:24.797 15:56:00 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:04:24.797 15:56:00 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:04:24.797 15:56:00 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:24.797 15:56:00 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:04:24.797 15:56:00 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:24.797 15:56:00 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:24.797 15:56:00 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:24.797 15:56:00 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:24.797 15:56:00 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.797 15:56:00 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:24.797 ************************************ 00:04:24.797 START TEST nvme_mount 00:04:24.797 ************************************ 00:04:24.797 15:56:00 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:24.797 15:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:24.797 15:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:24.797 15:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.797 15:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:24.797 15:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:24.797 15:56:00 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:24.797 15:56:00 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:24.797 15:56:00 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:24.797 15:56:00 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:24.797 15:56:00 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:24.797 15:56:00 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:24.797 15:56:00 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:24.797 15:56:00 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:24.797 15:56:00 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:24.797 15:56:00 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:24.797 15:56:00 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
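The size gate that admits nvme0n1 above (the "echo 1920383410176" followed by the comparison against min_disk_size=3221225472) comes down to sectors-to-bytes arithmetic. A sketch of that check; the "* 512" step is our reconstruction of sec_size_to_bytes, not a quote of it.

# /sys/block/<dev>/size is reported in 512-byte sectors, so bytes = sectors * 512.
# The devices suite only accepts disks of at least 3 GiB.
sec_size_to_bytes() {
    local dev=$1
    [[ -e /sys/block/$dev ]] || return 1
    echo $(( $(< "/sys/block/$dev/size") * 512 ))
}

min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472, as in the trace
(( $(sec_size_to_bytes nvme0n1) >= min_disk_size )) && echo "nvme0n1 accepted"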
# (( part <= part_no )) 00:04:24.797 15:56:00 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:24.797 15:56:00 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:24.797 15:56:00 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:25.762 Creating new GPT entries in memory. 00:04:25.762 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:25.762 other utilities. 00:04:25.762 15:56:01 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:25.762 15:56:01 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:25.762 15:56:01 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:25.762 15:56:01 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:25.762 15:56:01 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:26.705 Creating new GPT entries in memory. 00:04:26.705 The operation has completed successfully. 00:04:26.705 15:56:02 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:26.705 15:56:02 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:26.705 15:56:02 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2049619 00:04:26.966 15:56:02 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:26.966 15:56:02 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:26.966 15:56:02 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:26.966 15:56:02 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:26.966 15:56:02 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:26.966 15:56:02 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:26.966 15:56:02 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:26.966 15:56:02 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:26.966 15:56:02 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:26.966 15:56:02 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:26.966 15:56:02 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:26.966 15:56:02 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:26.966 15:56:02 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:26.966 15:56:02 
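Condensed, the nvme_mount setup just traced is a short shell sequence: wipe the GPT, create one 1 GiB partition, format it ext4 and mount it under the test directory. The $disk and $mnt variables below are ours for readability; the sector range, flags, and mount path are copied from the trace.

disk=nvme0n1
mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount

sgdisk "/dev/$disk" --zap-all                                  # clear any old partition table
flock "/dev/$disk" sgdisk "/dev/$disk" --new=1:2048:2099199    # 2097152 sectors = 1 GiB
mkdir -p "$mnt"
mkfs.ext4 -qF "/dev/${disk}p1"                                 # quiet, force
mount "/dev/${disk}p1" "$mnt"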
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:26.966 15:56:02 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:26.966 15:56:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.966 15:56:02 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:26.966 15:56:02 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:26.966 15:56:02 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.966 15:56:02 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:30.271 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.271 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.271 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.271 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.271 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.271 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.271 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.271 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.271 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.271 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.271 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.271 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.271 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.271 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.271 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.271 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.271 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.271 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:30.271 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:30.271 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.271 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.271 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.271 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.271 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.271 15:56:05 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.271 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.271 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.271 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.271 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.271 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.271 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.271 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.271 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.271 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.271 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.271 15:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.532 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:30.532 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:30.532 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.533 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:30.533 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:30.533 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:30.533 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.533 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.533 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:30.533 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:30.533 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:30.533 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:30.533 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:30.794 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:30.794 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:30.794 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:30.794 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:30.794 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:30.794 15:56:06 setup.sh.devices.nvme_mount -- 
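The cleanup that follows each pass (cleanup_nvme in the trace above) is symmetric: unmount if still mounted, then clear the filesystem and GPT signatures so the next variant starts from a blank disk. A sketch using the same commands the trace shows:

mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount

mountpoint -q "$mnt" && umount "$mnt"
[[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1   # removes the ext4 magic (53 ef) seen above
[[ -b /dev/nvme0n1 ]]   && wipefs --all /dev/nvme0n1     # removes GPT headers ("EFI PART") and the protective MBR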
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:30.794 15:56:06 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.794 15:56:06 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:30.794 15:56:06 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:30.794 15:56:06 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.794 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:30.794 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:30.794 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:30.794 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.794 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:30.794 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:30.794 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:30.794 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:30.794 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:30.794 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.794 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:30.794 15:56:06 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:30.794 15:56:06 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:30.794 15:56:06 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:34.099 15:56:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.099 15:56:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.099 15:56:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.099 15:56:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.099 15:56:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.099 15:56:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.099 15:56:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.099 15:56:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.099 15:56:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.099 15:56:09 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.099 15:56:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.099 15:56:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.099 15:56:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.099 15:56:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.099 15:56:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.099 15:56:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.099 15:56:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.099 15:56:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:34.099 15:56:09 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:34.099 15:56:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.099 15:56:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.099 15:56:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.099 15:56:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.099 15:56:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.099 15:56:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.099 15:56:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.099 15:56:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.099 15:56:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.099 15:56:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.099 15:56:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.099 15:56:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.099 15:56:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.099 15:56:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.099 15:56:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.099 15:56:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.099 15:56:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.360 15:56:10 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:34.360 15:56:10 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:34.361 15:56:10 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:34.361 15:56:10 
setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:34.361 15:56:10 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:34.361 15:56:10 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:34.361 15:56:10 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:04:34.361 15:56:10 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:34.361 15:56:10 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:34.361 15:56:10 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:34.361 15:56:10 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:34.361 15:56:10 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:34.361 15:56:10 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:34.361 15:56:10 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:34.361 15:56:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.361 15:56:10 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:34.361 15:56:10 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:34.361 15:56:10 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:34.361 15:56:10 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:37.665 15:56:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.665 15:56:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.665 15:56:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.665 15:56:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.665 15:56:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.665 15:56:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.665 15:56:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.665 15:56:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.665 15:56:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.665 15:56:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.665 15:56:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.665 15:56:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.665 15:56:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.665 15:56:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.665 15:56:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.665 15:56:13 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.665 15:56:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.665 15:56:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:37.665 15:56:13 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:37.665 15:56:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.665 15:56:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.665 15:56:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.665 15:56:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.665 15:56:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.665 15:56:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.665 15:56:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.665 15:56:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.665 15:56:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.665 15:56:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.665 15:56:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.665 15:56:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.665 15:56:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.665 15:56:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.665 15:56:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.665 15:56:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.665 15:56:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.237 15:56:13 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:38.237 15:56:13 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:38.237 15:56:13 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:38.237 15:56:13 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:38.237 15:56:13 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.237 15:56:13 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:38.237 15:56:13 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:38.237 15:56:13 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:38.237 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:38.237 00:04:38.237 real 0m13.342s 00:04:38.237 user 0m4.085s 00:04:38.237 sys 0m7.114s 00:04:38.237 15:56:13 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.237 15:56:13 
setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:38.237 ************************************ 00:04:38.237 END TEST nvme_mount 00:04:38.237 ************************************ 00:04:38.237 15:56:13 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:38.237 15:56:13 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:38.237 15:56:13 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.237 15:56:13 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.237 15:56:13 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:38.237 ************************************ 00:04:38.237 START TEST dm_mount 00:04:38.237 ************************************ 00:04:38.237 15:56:13 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:38.237 15:56:13 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:38.237 15:56:13 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:38.237 15:56:13 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:38.237 15:56:13 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:38.237 15:56:13 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:38.237 15:56:13 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:38.237 15:56:13 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:38.237 15:56:13 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:38.237 15:56:13 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:38.237 15:56:13 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:38.237 15:56:13 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:38.237 15:56:13 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:38.237 15:56:13 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:38.237 15:56:13 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:38.237 15:56:13 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:38.237 15:56:13 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:38.237 15:56:13 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:38.237 15:56:13 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:38.237 15:56:13 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:38.237 15:56:13 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:38.237 15:56:13 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:39.181 Creating new GPT entries in memory. 00:04:39.181 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:39.181 other utilities. 00:04:39.181 15:56:14 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:39.181 15:56:14 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:39.181 15:56:14 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:39.182 15:56:14 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:39.182 15:56:14 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:40.126 Creating new GPT entries in memory. 00:04:40.126 The operation has completed successfully. 00:04:40.126 15:56:15 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:40.126 15:56:15 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:40.126 15:56:15 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:40.126 15:56:15 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:40.126 15:56:15 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:41.539 The operation has completed successfully. 00:04:41.539 15:56:16 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:41.539 15:56:16 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:41.539 15:56:16 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2054625 00:04:41.539 15:56:16 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:41.539 15:56:16 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:41.539 15:56:16 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:41.539 15:56:16 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:41.539 15:56:17 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:41.539 15:56:17 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:41.539 15:56:17 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:41.539 15:56:17 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:41.539 15:56:17 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:41.539 15:56:17 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:41.539 15:56:17 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:41.539 15:56:17 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:41.539 15:56:17 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:41.539 15:56:17 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:41.539 15:56:17 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:41.539 15:56:17 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:41.539 15:56:17 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:41.539 15:56:17 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:41.539 15:56:17 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:41.539 15:56:17 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:41.539 15:56:17 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:41.539 15:56:17 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:41.539 15:56:17 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:41.539 15:56:17 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:41.539 15:56:17 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:41.539 15:56:17 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:41.539 15:56:17 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:41.539 15:56:17 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:41.539 15:56:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.539 15:56:17 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:41.539 15:56:17 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:41.539 15:56:17 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.539 15:56:17 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:44.836 15:56:20 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:44.836 15:56:20 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:48.140 15:56:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.140 15:56:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.140 15:56:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.140 15:56:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.140 15:56:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.140 15:56:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.140 15:56:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.140 15:56:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.140 15:56:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.140 15:56:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.140 15:56:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.140 15:56:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.140 15:56:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.140 15:56:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.140 15:56:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.140 15:56:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.140 15:56:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.140 15:56:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:48.140 15:56:23 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:48.140 15:56:23 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.140 15:56:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.140 15:56:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.140 15:56:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.140 15:56:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.140 15:56:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.140 15:56:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.140 15:56:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.140 15:56:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.140 15:56:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.140 15:56:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.140 15:56:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.140 15:56:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.140 15:56:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.140 15:56:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.140 15:56:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.140 15:56:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.713 15:56:24 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:48.713 15:56:24 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:48.713 15:56:24 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:48.713 15:56:24 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:48.713 15:56:24 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:48.713 15:56:24 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:48.713 15:56:24 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:48.713 15:56:24 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:48.713 15:56:24 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:48.713 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:48.713 15:56:24 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:48.713 15:56:24 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:48.713 00:04:48.713 real 0m10.427s 00:04:48.713 user 0m2.724s 00:04:48.713 sys 0m4.749s 00:04:48.713 15:56:24 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.713 15:56:24 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:48.713 ************************************ 00:04:48.713 END TEST dm_mount 00:04:48.713 ************************************ 00:04:48.713 15:56:24 setup.sh.devices -- common/autotest_common.sh@1142 -- # 
return 0 00:04:48.713 15:56:24 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:48.713 15:56:24 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:48.713 15:56:24 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:48.713 15:56:24 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:48.713 15:56:24 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:48.713 15:56:24 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:48.713 15:56:24 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:48.974 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:48.974 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:48.974 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:48.974 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:48.974 15:56:24 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:48.974 15:56:24 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:48.974 15:56:24 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:48.974 15:56:24 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:48.974 15:56:24 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:48.974 15:56:24 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:48.974 15:56:24 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:48.974 00:04:48.974 real 0m28.268s 00:04:48.974 user 0m8.421s 00:04:48.974 sys 0m14.617s 00:04:48.974 15:56:24 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.974 15:56:24 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:48.974 ************************************ 00:04:48.974 END TEST devices 00:04:48.974 ************************************ 00:04:48.974 15:56:24 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:48.974 00:04:48.974 real 1m33.270s 00:04:48.974 user 0m30.776s 00:04:48.974 sys 0m53.442s 00:04:48.974 15:56:24 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.974 15:56:24 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:48.974 ************************************ 00:04:48.974 END TEST setup.sh 00:04:48.974 ************************************ 00:04:48.974 15:56:24 -- common/autotest_common.sh@1142 -- # return 0 00:04:48.974 15:56:24 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:52.276 Hugepages 00:04:52.276 node hugesize free / total 00:04:52.276 node0 1048576kB 0 / 0 00:04:52.276 node0 2048kB 2048 / 2048 00:04:52.276 node1 1048576kB 0 / 0 00:04:52.276 node1 2048kB 0 / 0 00:04:52.276 00:04:52.276 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:52.276 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:52.276 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:52.276 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:52.276 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:52.276 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:52.276 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:52.276 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:52.276 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:52.276 NVMe 
0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:52.276 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:52.276 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:52.276 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:52.276 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:52.276 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:52.276 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:52.276 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:52.276 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:04:52.276 15:56:28 -- spdk/autotest.sh@130 -- # uname -s 00:04:52.538 15:56:28 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:52.538 15:56:28 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:52.538 15:56:28 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:55.850 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:55.850 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:55.850 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:55.850 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:55.850 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:55.850 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:55.850 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:55.850 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:55.850 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:55.850 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:55.850 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:55.850 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:55.850 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:55.850 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:55.850 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:55.850 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:57.761 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:58.021 15:56:33 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:58.963 15:56:34 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:58.963 15:56:34 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:58.963 15:56:34 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:58.963 15:56:34 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:58.963 15:56:34 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:58.963 15:56:34 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:58.963 15:56:34 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:58.963 15:56:34 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:58.963 15:56:34 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:58.963 15:56:34 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:58.963 15:56:34 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:04:58.963 15:56:34 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:02.261 Waiting for block devices as requested 00:05:02.261 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:02.261 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:02.521 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:02.521 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:02.521 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:02.782 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:02.782 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:02.782 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:03.042 0000:65:00.0 (144d a80a): 
vfio-pci -> nvme 00:05:03.042 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:03.303 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:03.303 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:03.303 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:03.303 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:03.571 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:03.571 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:03.571 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:03.903 15:56:39 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:03.903 15:56:39 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:05:03.903 15:56:39 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:03.903 15:56:39 -- common/autotest_common.sh@1502 -- # grep 0000:65:00.0/nvme/nvme 00:05:03.903 15:56:39 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:03.903 15:56:39 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:05:03.903 15:56:39 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:03.903 15:56:39 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:03.903 15:56:39 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:03.903 15:56:39 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:03.903 15:56:39 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:03.903 15:56:39 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:03.903 15:56:39 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:03.903 15:56:39 -- common/autotest_common.sh@1545 -- # oacs=' 0x5f' 00:05:03.903 15:56:39 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:03.903 15:56:39 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:03.903 15:56:39 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:03.903 15:56:39 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:03.903 15:56:39 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:03.903 15:56:39 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:03.903 15:56:39 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:03.903 15:56:39 -- common/autotest_common.sh@1557 -- # continue 00:05:03.903 15:56:39 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:03.903 15:56:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:03.903 15:56:39 -- common/autotest_common.sh@10 -- # set +x 00:05:03.903 15:56:39 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:03.903 15:56:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:03.903 15:56:39 -- common/autotest_common.sh@10 -- # set +x 00:05:03.903 15:56:39 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:07.206 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:07.206 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:07.206 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:07.206 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:07.206 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:07.206 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:07.206 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:07.206 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:07.206 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:07.206 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 
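The pre-cleanup pass above enumerates controllers with gen_nvme.sh piped through jq and then checks the controller's OACS namespace-management bit (and unallocated capacity) before deciding whether to revert namespaces. A minimal standalone sketch of that probe, assuming a local SPDK checkout, nvme-cli on the PATH, and the controller enumerated as /dev/nvme0:

  rootdir=/path/to/spdk                                   # assumption: your SPDK checkout
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  printf 'NVMe controller at %s\n' "${bdfs[@]}"
  oacs=$(nvme id-ctrl /dev/nvme0 | grep oacs | cut -d: -f2)
  (( oacs & 0x8 )) && echo 'namespace management supported'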
00:05:07.467 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:07.467 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:07.467 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:07.467 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:07.467 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:07.467 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:07.467 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:07.728 15:56:43 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:07.728 15:56:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:07.728 15:56:43 -- common/autotest_common.sh@10 -- # set +x 00:05:07.728 15:56:43 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:07.728 15:56:43 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:07.728 15:56:43 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:07.728 15:56:43 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:07.728 15:56:43 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:07.728 15:56:43 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:07.728 15:56:43 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:07.728 15:56:43 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:07.728 15:56:43 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:07.728 15:56:43 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:07.728 15:56:43 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:07.988 15:56:43 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:07.988 15:56:43 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:05:07.988 15:56:43 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:07.988 15:56:43 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:05:07.988 15:56:43 -- common/autotest_common.sh@1580 -- # device=0xa80a 00:05:07.988 15:56:43 -- common/autotest_common.sh@1581 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:05:07.988 15:56:43 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:05:07.988 15:56:43 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:05:07.988 15:56:43 -- common/autotest_common.sh@1593 -- # return 0 00:05:07.988 15:56:43 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:07.988 15:56:43 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:07.988 15:56:43 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:07.988 15:56:43 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:07.988 15:56:43 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:07.988 15:56:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:07.988 15:56:43 -- common/autotest_common.sh@10 -- # set +x 00:05:07.988 15:56:43 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:07.988 15:56:43 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:07.988 15:56:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.988 15:56:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.988 15:56:43 -- common/autotest_common.sh@10 -- # set +x 00:05:07.988 ************************************ 00:05:07.988 START TEST env 00:05:07.988 ************************************ 00:05:07.988 15:56:43 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:07.988 * Looking for test storage... 
00:05:07.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:07.988 15:56:43 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:07.988 15:56:43 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.988 15:56:43 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.988 15:56:43 env -- common/autotest_common.sh@10 -- # set +x 00:05:07.988 ************************************ 00:05:07.988 START TEST env_memory 00:05:07.988 ************************************ 00:05:07.988 15:56:43 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:07.988 00:05:07.988 00:05:07.988 CUnit - A unit testing framework for C - Version 2.1-3 00:05:07.988 http://cunit.sourceforge.net/ 00:05:07.988 00:05:07.988 00:05:07.988 Suite: memory 00:05:08.249 Test: alloc and free memory map ...[2024-07-15 15:56:43.846043] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:08.249 passed 00:05:08.249 Test: mem map translation ...[2024-07-15 15:56:43.871639] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:08.249 [2024-07-15 15:56:43.871673] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:08.249 [2024-07-15 15:56:43.871721] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:08.249 [2024-07-15 15:56:43.871729] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:08.249 passed 00:05:08.249 Test: mem map registration ...[2024-07-15 15:56:43.927120] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:08.249 [2024-07-15 15:56:43.927149] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:08.249 passed 00:05:08.249 Test: mem map adjacent registrations ...passed 00:05:08.249 00:05:08.249 Run Summary: Type Total Ran Passed Failed Inactive 00:05:08.249 suites 1 1 n/a 0 0 00:05:08.249 tests 4 4 4 0 0 00:05:08.249 asserts 152 152 152 0 n/a 00:05:08.249 00:05:08.249 Elapsed time = 0.192 seconds 00:05:08.249 00:05:08.249 real 0m0.207s 00:05:08.249 user 0m0.191s 00:05:08.249 sys 0m0.014s 00:05:08.249 15:56:44 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.249 15:56:44 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:08.249 ************************************ 00:05:08.249 END TEST env_memory 00:05:08.249 ************************************ 00:05:08.249 15:56:44 env -- common/autotest_common.sh@1142 -- # return 0 00:05:08.249 15:56:44 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:08.249 15:56:44 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
00:05:08.249 15:56:44 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.249 15:56:44 env -- common/autotest_common.sh@10 -- # set +x 00:05:08.249 ************************************ 00:05:08.249 START TEST env_vtophys 00:05:08.249 ************************************ 00:05:08.249 15:56:44 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:08.510 EAL: lib.eal log level changed from notice to debug 00:05:08.510 EAL: Detected lcore 0 as core 0 on socket 0 00:05:08.510 EAL: Detected lcore 1 as core 1 on socket 0 00:05:08.510 EAL: Detected lcore 2 as core 2 on socket 0 00:05:08.510 EAL: Detected lcore 3 as core 3 on socket 0 00:05:08.510 EAL: Detected lcore 4 as core 4 on socket 0 00:05:08.510 EAL: Detected lcore 5 as core 5 on socket 0 00:05:08.511 EAL: Detected lcore 6 as core 6 on socket 0 00:05:08.511 EAL: Detected lcore 7 as core 7 on socket 0 00:05:08.511 EAL: Detected lcore 8 as core 8 on socket 0 00:05:08.511 EAL: Detected lcore 9 as core 9 on socket 0 00:05:08.511 EAL: Detected lcore 10 as core 10 on socket 0 00:05:08.511 EAL: Detected lcore 11 as core 11 on socket 0 00:05:08.511 EAL: Detected lcore 12 as core 12 on socket 0 00:05:08.511 EAL: Detected lcore 13 as core 13 on socket 0 00:05:08.511 EAL: Detected lcore 14 as core 14 on socket 0 00:05:08.511 EAL: Detected lcore 15 as core 15 on socket 0 00:05:08.511 EAL: Detected lcore 16 as core 16 on socket 0 00:05:08.511 EAL: Detected lcore 17 as core 17 on socket 0 00:05:08.511 EAL: Detected lcore 18 as core 18 on socket 0 00:05:08.511 EAL: Detected lcore 19 as core 19 on socket 0 00:05:08.511 EAL: Detected lcore 20 as core 20 on socket 0 00:05:08.511 EAL: Detected lcore 21 as core 21 on socket 0 00:05:08.511 EAL: Detected lcore 22 as core 22 on socket 0 00:05:08.511 EAL: Detected lcore 23 as core 23 on socket 0 00:05:08.511 EAL: Detected lcore 24 as core 24 on socket 0 00:05:08.511 EAL: Detected lcore 25 as core 25 on socket 0 00:05:08.511 EAL: Detected lcore 26 as core 26 on socket 0 00:05:08.511 EAL: Detected lcore 27 as core 27 on socket 0 00:05:08.511 EAL: Detected lcore 28 as core 28 on socket 0 00:05:08.511 EAL: Detected lcore 29 as core 29 on socket 0 00:05:08.511 EAL: Detected lcore 30 as core 30 on socket 0 00:05:08.511 EAL: Detected lcore 31 as core 31 on socket 0 00:05:08.511 EAL: Detected lcore 32 as core 32 on socket 0 00:05:08.511 EAL: Detected lcore 33 as core 33 on socket 0 00:05:08.511 EAL: Detected lcore 34 as core 34 on socket 0 00:05:08.511 EAL: Detected lcore 35 as core 35 on socket 0 00:05:08.511 EAL: Detected lcore 36 as core 0 on socket 1 00:05:08.511 EAL: Detected lcore 37 as core 1 on socket 1 00:05:08.511 EAL: Detected lcore 38 as core 2 on socket 1 00:05:08.511 EAL: Detected lcore 39 as core 3 on socket 1 00:05:08.511 EAL: Detected lcore 40 as core 4 on socket 1 00:05:08.511 EAL: Detected lcore 41 as core 5 on socket 1 00:05:08.511 EAL: Detected lcore 42 as core 6 on socket 1 00:05:08.511 EAL: Detected lcore 43 as core 7 on socket 1 00:05:08.511 EAL: Detected lcore 44 as core 8 on socket 1 00:05:08.511 EAL: Detected lcore 45 as core 9 on socket 1 00:05:08.511 EAL: Detected lcore 46 as core 10 on socket 1 00:05:08.511 EAL: Detected lcore 47 as core 11 on socket 1 00:05:08.511 EAL: Detected lcore 48 as core 12 on socket 1 00:05:08.511 EAL: Detected lcore 49 as core 13 on socket 1 00:05:08.511 EAL: Detected lcore 50 as core 14 on socket 1 00:05:08.511 EAL: Detected lcore 51 as core 15 on socket 1 00:05:08.511 
EAL: Detected lcore 52 as core 16 on socket 1 00:05:08.511 EAL: Detected lcore 53 as core 17 on socket 1 00:05:08.511 EAL: Detected lcore 54 as core 18 on socket 1 00:05:08.511 EAL: Detected lcore 55 as core 19 on socket 1 00:05:08.511 EAL: Detected lcore 56 as core 20 on socket 1 00:05:08.511 EAL: Detected lcore 57 as core 21 on socket 1 00:05:08.511 EAL: Detected lcore 58 as core 22 on socket 1 00:05:08.511 EAL: Detected lcore 59 as core 23 on socket 1 00:05:08.511 EAL: Detected lcore 60 as core 24 on socket 1 00:05:08.511 EAL: Detected lcore 61 as core 25 on socket 1 00:05:08.511 EAL: Detected lcore 62 as core 26 on socket 1 00:05:08.511 EAL: Detected lcore 63 as core 27 on socket 1 00:05:08.511 EAL: Detected lcore 64 as core 28 on socket 1 00:05:08.511 EAL: Detected lcore 65 as core 29 on socket 1 00:05:08.511 EAL: Detected lcore 66 as core 30 on socket 1 00:05:08.511 EAL: Detected lcore 67 as core 31 on socket 1 00:05:08.511 EAL: Detected lcore 68 as core 32 on socket 1 00:05:08.511 EAL: Detected lcore 69 as core 33 on socket 1 00:05:08.511 EAL: Detected lcore 70 as core 34 on socket 1 00:05:08.511 EAL: Detected lcore 71 as core 35 on socket 1 00:05:08.511 EAL: Detected lcore 72 as core 0 on socket 0 00:05:08.511 EAL: Detected lcore 73 as core 1 on socket 0 00:05:08.511 EAL: Detected lcore 74 as core 2 on socket 0 00:05:08.511 EAL: Detected lcore 75 as core 3 on socket 0 00:05:08.511 EAL: Detected lcore 76 as core 4 on socket 0 00:05:08.511 EAL: Detected lcore 77 as core 5 on socket 0 00:05:08.511 EAL: Detected lcore 78 as core 6 on socket 0 00:05:08.511 EAL: Detected lcore 79 as core 7 on socket 0 00:05:08.511 EAL: Detected lcore 80 as core 8 on socket 0 00:05:08.511 EAL: Detected lcore 81 as core 9 on socket 0 00:05:08.511 EAL: Detected lcore 82 as core 10 on socket 0 00:05:08.511 EAL: Detected lcore 83 as core 11 on socket 0 00:05:08.511 EAL: Detected lcore 84 as core 12 on socket 0 00:05:08.511 EAL: Detected lcore 85 as core 13 on socket 0 00:05:08.511 EAL: Detected lcore 86 as core 14 on socket 0 00:05:08.511 EAL: Detected lcore 87 as core 15 on socket 0 00:05:08.511 EAL: Detected lcore 88 as core 16 on socket 0 00:05:08.511 EAL: Detected lcore 89 as core 17 on socket 0 00:05:08.511 EAL: Detected lcore 90 as core 18 on socket 0 00:05:08.511 EAL: Detected lcore 91 as core 19 on socket 0 00:05:08.511 EAL: Detected lcore 92 as core 20 on socket 0 00:05:08.511 EAL: Detected lcore 93 as core 21 on socket 0 00:05:08.511 EAL: Detected lcore 94 as core 22 on socket 0 00:05:08.511 EAL: Detected lcore 95 as core 23 on socket 0 00:05:08.511 EAL: Detected lcore 96 as core 24 on socket 0 00:05:08.511 EAL: Detected lcore 97 as core 25 on socket 0 00:05:08.511 EAL: Detected lcore 98 as core 26 on socket 0 00:05:08.511 EAL: Detected lcore 99 as core 27 on socket 0 00:05:08.511 EAL: Detected lcore 100 as core 28 on socket 0 00:05:08.511 EAL: Detected lcore 101 as core 29 on socket 0 00:05:08.511 EAL: Detected lcore 102 as core 30 on socket 0 00:05:08.511 EAL: Detected lcore 103 as core 31 on socket 0 00:05:08.511 EAL: Detected lcore 104 as core 32 on socket 0 00:05:08.511 EAL: Detected lcore 105 as core 33 on socket 0 00:05:08.511 EAL: Detected lcore 106 as core 34 on socket 0 00:05:08.511 EAL: Detected lcore 107 as core 35 on socket 0 00:05:08.511 EAL: Detected lcore 108 as core 0 on socket 1 00:05:08.511 EAL: Detected lcore 109 as core 1 on socket 1 00:05:08.511 EAL: Detected lcore 110 as core 2 on socket 1 00:05:08.511 EAL: Detected lcore 111 as core 3 on socket 1 00:05:08.511 EAL: Detected 
lcore 112 as core 4 on socket 1 00:05:08.511 EAL: Detected lcore 113 as core 5 on socket 1 00:05:08.511 EAL: Detected lcore 114 as core 6 on socket 1 00:05:08.511 EAL: Detected lcore 115 as core 7 on socket 1 00:05:08.511 EAL: Detected lcore 116 as core 8 on socket 1 00:05:08.511 EAL: Detected lcore 117 as core 9 on socket 1 00:05:08.511 EAL: Detected lcore 118 as core 10 on socket 1 00:05:08.511 EAL: Detected lcore 119 as core 11 on socket 1 00:05:08.511 EAL: Detected lcore 120 as core 12 on socket 1 00:05:08.511 EAL: Detected lcore 121 as core 13 on socket 1 00:05:08.511 EAL: Detected lcore 122 as core 14 on socket 1 00:05:08.511 EAL: Detected lcore 123 as core 15 on socket 1 00:05:08.511 EAL: Detected lcore 124 as core 16 on socket 1 00:05:08.511 EAL: Detected lcore 125 as core 17 on socket 1 00:05:08.511 EAL: Detected lcore 126 as core 18 on socket 1 00:05:08.511 EAL: Detected lcore 127 as core 19 on socket 1 00:05:08.511 EAL: Skipped lcore 128 as core 20 on socket 1 00:05:08.511 EAL: Skipped lcore 129 as core 21 on socket 1 00:05:08.511 EAL: Skipped lcore 130 as core 22 on socket 1 00:05:08.511 EAL: Skipped lcore 131 as core 23 on socket 1 00:05:08.511 EAL: Skipped lcore 132 as core 24 on socket 1 00:05:08.511 EAL: Skipped lcore 133 as core 25 on socket 1 00:05:08.511 EAL: Skipped lcore 134 as core 26 on socket 1 00:05:08.511 EAL: Skipped lcore 135 as core 27 on socket 1 00:05:08.511 EAL: Skipped lcore 136 as core 28 on socket 1 00:05:08.511 EAL: Skipped lcore 137 as core 29 on socket 1 00:05:08.511 EAL: Skipped lcore 138 as core 30 on socket 1 00:05:08.511 EAL: Skipped lcore 139 as core 31 on socket 1 00:05:08.511 EAL: Skipped lcore 140 as core 32 on socket 1 00:05:08.511 EAL: Skipped lcore 141 as core 33 on socket 1 00:05:08.511 EAL: Skipped lcore 142 as core 34 on socket 1 00:05:08.511 EAL: Skipped lcore 143 as core 35 on socket 1 00:05:08.511 EAL: Maximum logical cores by configuration: 128 00:05:08.511 EAL: Detected CPU lcores: 128 00:05:08.511 EAL: Detected NUMA nodes: 2 00:05:08.511 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:08.511 EAL: Detected shared linkage of DPDK 00:05:08.511 EAL: No shared files mode enabled, IPC will be disabled 00:05:08.511 EAL: Bus pci wants IOVA as 'DC' 00:05:08.511 EAL: Buses did not request a specific IOVA mode. 00:05:08.511 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:08.511 EAL: Selected IOVA mode 'VA' 00:05:08.511 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.511 EAL: Probing VFIO support... 00:05:08.511 EAL: IOMMU type 1 (Type 1) is supported 00:05:08.511 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:08.511 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:08.511 EAL: VFIO support initialized 00:05:08.511 EAL: Ask a virtual area of 0x2e000 bytes 00:05:08.511 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:08.511 EAL: Setting up physically contiguous memory... 
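The "No free 2048 kB hugepages reported on node 1" notice a little further up is consistent with the setup.sh status table earlier in the log (node1 shows 0 / 0 for the 2048kB pool, node0 2048 / 2048). When that split is not intended, the per-node pools can be confirmed before rerunning; a sketch, assuming 2 MB hugepages and a local checkout:

  grep . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
  /path/to/spdk/scripts/setup.sh status      # path is an assumption; prints the same Hugepages table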
00:05:08.511 EAL: Setting maximum number of open files to 524288 00:05:08.511 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:08.511 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:08.511 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:08.511 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.511 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:08.511 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:08.511 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.511 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:08.511 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:08.511 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.511 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:08.511 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:08.511 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.511 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:08.511 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:08.511 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.511 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:08.511 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:08.511 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.511 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:08.511 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:08.511 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.511 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:08.511 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:08.511 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.511 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:08.511 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:08.511 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:08.511 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.511 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:08.511 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:08.511 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.511 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:08.511 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:08.511 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.512 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:08.512 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:08.512 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.512 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:08.512 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:08.512 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.512 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:08.512 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:08.512 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.512 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:08.512 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:08.512 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.512 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:08.512 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:08.512 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.512 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:08.512 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:08.512 EAL: Hugepages will be freed exactly as allocated. 00:05:08.512 EAL: No shared files mode enabled, IPC is disabled 00:05:08.512 EAL: No shared files mode enabled, IPC is disabled 00:05:08.512 EAL: TSC frequency is ~2400000 KHz 00:05:08.512 EAL: Main lcore 0 is ready (tid=7f4ec0165a00;cpuset=[0]) 00:05:08.512 EAL: Trying to obtain current memory policy. 00:05:08.512 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.512 EAL: Restoring previous memory policy: 0 00:05:08.512 EAL: request: mp_malloc_sync 00:05:08.512 EAL: No shared files mode enabled, IPC is disabled 00:05:08.512 EAL: Heap on socket 0 was expanded by 2MB 00:05:08.512 EAL: No shared files mode enabled, IPC is disabled 00:05:08.512 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:08.512 EAL: Mem event callback 'spdk:(nil)' registered 00:05:08.512 00:05:08.512 00:05:08.512 CUnit - A unit testing framework for C - Version 2.1-3 00:05:08.512 http://cunit.sourceforge.net/ 00:05:08.512 00:05:08.512 00:05:08.512 Suite: components_suite 00:05:08.512 Test: vtophys_malloc_test ...passed 00:05:08.512 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:08.512 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.512 EAL: Restoring previous memory policy: 4 00:05:08.512 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.512 EAL: request: mp_malloc_sync 00:05:08.512 EAL: No shared files mode enabled, IPC is disabled 00:05:08.512 EAL: Heap on socket 0 was expanded by 4MB 00:05:08.512 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.512 EAL: request: mp_malloc_sync 00:05:08.512 EAL: No shared files mode enabled, IPC is disabled 00:05:08.512 EAL: Heap on socket 0 was shrunk by 4MB 00:05:08.512 EAL: Trying to obtain current memory policy. 00:05:08.512 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.512 EAL: Restoring previous memory policy: 4 00:05:08.512 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.512 EAL: request: mp_malloc_sync 00:05:08.512 EAL: No shared files mode enabled, IPC is disabled 00:05:08.512 EAL: Heap on socket 0 was expanded by 6MB 00:05:08.512 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.512 EAL: request: mp_malloc_sync 00:05:08.512 EAL: No shared files mode enabled, IPC is disabled 00:05:08.512 EAL: Heap on socket 0 was shrunk by 6MB 00:05:08.512 EAL: Trying to obtain current memory policy. 00:05:08.512 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.512 EAL: Restoring previous memory policy: 4 00:05:08.512 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.512 EAL: request: mp_malloc_sync 00:05:08.512 EAL: No shared files mode enabled, IPC is disabled 00:05:08.512 EAL: Heap on socket 0 was expanded by 10MB 00:05:08.512 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.512 EAL: request: mp_malloc_sync 00:05:08.512 EAL: No shared files mode enabled, IPC is disabled 00:05:08.512 EAL: Heap on socket 0 was shrunk by 10MB 00:05:08.512 EAL: Trying to obtain current memory policy. 
00:05:08.512 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.512 EAL: Restoring previous memory policy: 4 00:05:08.512 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.512 EAL: request: mp_malloc_sync 00:05:08.512 EAL: No shared files mode enabled, IPC is disabled 00:05:08.512 EAL: Heap on socket 0 was expanded by 18MB 00:05:08.512 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.512 EAL: request: mp_malloc_sync 00:05:08.512 EAL: No shared files mode enabled, IPC is disabled 00:05:08.512 EAL: Heap on socket 0 was shrunk by 18MB 00:05:08.512 EAL: Trying to obtain current memory policy. 00:05:08.512 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.512 EAL: Restoring previous memory policy: 4 00:05:08.512 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.512 EAL: request: mp_malloc_sync 00:05:08.512 EAL: No shared files mode enabled, IPC is disabled 00:05:08.512 EAL: Heap on socket 0 was expanded by 34MB 00:05:08.512 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.512 EAL: request: mp_malloc_sync 00:05:08.512 EAL: No shared files mode enabled, IPC is disabled 00:05:08.512 EAL: Heap on socket 0 was shrunk by 34MB 00:05:08.512 EAL: Trying to obtain current memory policy. 00:05:08.512 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.512 EAL: Restoring previous memory policy: 4 00:05:08.512 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.512 EAL: request: mp_malloc_sync 00:05:08.512 EAL: No shared files mode enabled, IPC is disabled 00:05:08.512 EAL: Heap on socket 0 was expanded by 66MB 00:05:08.512 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.512 EAL: request: mp_malloc_sync 00:05:08.512 EAL: No shared files mode enabled, IPC is disabled 00:05:08.512 EAL: Heap on socket 0 was shrunk by 66MB 00:05:08.512 EAL: Trying to obtain current memory policy. 00:05:08.512 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.512 EAL: Restoring previous memory policy: 4 00:05:08.512 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.512 EAL: request: mp_malloc_sync 00:05:08.512 EAL: No shared files mode enabled, IPC is disabled 00:05:08.512 EAL: Heap on socket 0 was expanded by 130MB 00:05:08.512 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.512 EAL: request: mp_malloc_sync 00:05:08.512 EAL: No shared files mode enabled, IPC is disabled 00:05:08.512 EAL: Heap on socket 0 was shrunk by 130MB 00:05:08.512 EAL: Trying to obtain current memory policy. 00:05:08.512 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.512 EAL: Restoring previous memory policy: 4 00:05:08.512 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.512 EAL: request: mp_malloc_sync 00:05:08.512 EAL: No shared files mode enabled, IPC is disabled 00:05:08.512 EAL: Heap on socket 0 was expanded by 258MB 00:05:08.512 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.512 EAL: request: mp_malloc_sync 00:05:08.512 EAL: No shared files mode enabled, IPC is disabled 00:05:08.512 EAL: Heap on socket 0 was shrunk by 258MB 00:05:08.512 EAL: Trying to obtain current memory policy. 
00:05:08.512 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.780 EAL: Restoring previous memory policy: 4 00:05:08.780 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.780 EAL: request: mp_malloc_sync 00:05:08.780 EAL: No shared files mode enabled, IPC is disabled 00:05:08.780 EAL: Heap on socket 0 was expanded by 514MB 00:05:08.780 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.780 EAL: request: mp_malloc_sync 00:05:08.780 EAL: No shared files mode enabled, IPC is disabled 00:05:08.780 EAL: Heap on socket 0 was shrunk by 514MB 00:05:08.780 EAL: Trying to obtain current memory policy. 00:05:08.780 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.040 EAL: Restoring previous memory policy: 4 00:05:09.040 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.040 EAL: request: mp_malloc_sync 00:05:09.040 EAL: No shared files mode enabled, IPC is disabled 00:05:09.040 EAL: Heap on socket 0 was expanded by 1026MB 00:05:09.040 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.040 EAL: request: mp_malloc_sync 00:05:09.040 EAL: No shared files mode enabled, IPC is disabled 00:05:09.040 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:09.040 passed 00:05:09.040 00:05:09.040 Run Summary: Type Total Ran Passed Failed Inactive 00:05:09.040 suites 1 1 n/a 0 0 00:05:09.040 tests 2 2 2 0 0 00:05:09.040 asserts 497 497 497 0 n/a 00:05:09.040 00:05:09.040 Elapsed time = 0.657 seconds 00:05:09.040 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.040 EAL: request: mp_malloc_sync 00:05:09.040 EAL: No shared files mode enabled, IPC is disabled 00:05:09.040 EAL: Heap on socket 0 was shrunk by 2MB 00:05:09.040 EAL: No shared files mode enabled, IPC is disabled 00:05:09.040 EAL: No shared files mode enabled, IPC is disabled 00:05:09.040 EAL: No shared files mode enabled, IPC is disabled 00:05:09.040 00:05:09.040 real 0m0.776s 00:05:09.040 user 0m0.408s 00:05:09.040 sys 0m0.344s 00:05:09.040 15:56:44 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.040 15:56:44 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:09.040 ************************************ 00:05:09.040 END TEST env_vtophys 00:05:09.040 ************************************ 00:05:09.301 15:56:44 env -- common/autotest_common.sh@1142 -- # return 0 00:05:09.301 15:56:44 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:09.301 15:56:44 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:09.301 15:56:44 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.301 15:56:44 env -- common/autotest_common.sh@10 -- # set +x 00:05:09.301 ************************************ 00:05:09.301 START TEST env_pci 00:05:09.301 ************************************ 00:05:09.301 15:56:44 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:09.301 00:05:09.301 00:05:09.301 CUnit - A unit testing framework for C - Version 2.1-3 00:05:09.301 http://cunit.sourceforge.net/ 00:05:09.301 00:05:09.301 00:05:09.301 Suite: pci 00:05:09.301 Test: pci_hook ...[2024-07-15 15:56:44.947756] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2065915 has claimed it 00:05:09.301 EAL: Cannot find device (10000:00:01.0) 00:05:09.301 EAL: Failed to attach device on primary process 00:05:09.301 passed 00:05:09.301 
00:05:09.301 Run Summary: Type Total Ran Passed Failed Inactive 00:05:09.301 suites 1 1 n/a 0 0 00:05:09.301 tests 1 1 1 0 0 00:05:09.301 asserts 25 25 25 0 n/a 00:05:09.301 00:05:09.301 Elapsed time = 0.029 seconds 00:05:09.301 00:05:09.301 real 0m0.048s 00:05:09.301 user 0m0.012s 00:05:09.301 sys 0m0.036s 00:05:09.301 15:56:44 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.301 15:56:44 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:09.301 ************************************ 00:05:09.301 END TEST env_pci 00:05:09.301 ************************************ 00:05:09.301 15:56:45 env -- common/autotest_common.sh@1142 -- # return 0 00:05:09.301 15:56:45 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:09.301 15:56:45 env -- env/env.sh@15 -- # uname 00:05:09.301 15:56:45 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:09.301 15:56:45 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:09.301 15:56:45 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:09.301 15:56:45 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:09.301 15:56:45 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.301 15:56:45 env -- common/autotest_common.sh@10 -- # set +x 00:05:09.301 ************************************ 00:05:09.301 START TEST env_dpdk_post_init 00:05:09.301 ************************************ 00:05:09.301 15:56:45 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:09.301 EAL: Detected CPU lcores: 128 00:05:09.301 EAL: Detected NUMA nodes: 2 00:05:09.301 EAL: Detected shared linkage of DPDK 00:05:09.301 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:09.301 EAL: Selected IOVA mode 'VA' 00:05:09.301 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.301 EAL: VFIO support initialized 00:05:09.301 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:09.561 EAL: Using IOMMU type 1 (Type 1) 00:05:09.561 EAL: Ignore mapping IO port bar(1) 00:05:09.822 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:05:09.822 EAL: Ignore mapping IO port bar(1) 00:05:09.822 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:05:10.082 EAL: Ignore mapping IO port bar(1) 00:05:10.082 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:05:10.342 EAL: Ignore mapping IO port bar(1) 00:05:10.342 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:05:10.603 EAL: Ignore mapping IO port bar(1) 00:05:10.603 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:05:10.603 EAL: Ignore mapping IO port bar(1) 00:05:10.863 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:05:10.863 EAL: Ignore mapping IO port bar(1) 00:05:11.123 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:05:11.123 EAL: Ignore mapping IO port bar(1) 00:05:11.383 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:05:11.383 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:05:11.643 EAL: Ignore mapping IO port bar(1) 00:05:11.643 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 
00:05:11.904 EAL: Ignore mapping IO port bar(1) 00:05:11.904 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:05:12.165 EAL: Ignore mapping IO port bar(1) 00:05:12.165 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:05:12.165 EAL: Ignore mapping IO port bar(1) 00:05:12.425 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:05:12.425 EAL: Ignore mapping IO port bar(1) 00:05:12.685 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:05:12.685 EAL: Ignore mapping IO port bar(1) 00:05:12.946 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:05:12.946 EAL: Ignore mapping IO port bar(1) 00:05:12.946 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:05:13.207 EAL: Ignore mapping IO port bar(1) 00:05:13.207 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:05:13.207 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:05:13.207 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:05:13.467 Starting DPDK initialization... 00:05:13.467 Starting SPDK post initialization... 00:05:13.467 SPDK NVMe probe 00:05:13.467 Attaching to 0000:65:00.0 00:05:13.467 Attached to 0000:65:00.0 00:05:13.467 Cleaning up... 00:05:15.383 00:05:15.383 real 0m5.721s 00:05:15.383 user 0m0.191s 00:05:15.383 sys 0m0.076s 00:05:15.383 15:56:50 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.383 15:56:50 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:15.383 ************************************ 00:05:15.383 END TEST env_dpdk_post_init 00:05:15.383 ************************************ 00:05:15.383 15:56:50 env -- common/autotest_common.sh@1142 -- # return 0 00:05:15.383 15:56:50 env -- env/env.sh@26 -- # uname 00:05:15.383 15:56:50 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:15.383 15:56:50 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:15.383 15:56:50 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.383 15:56:50 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.383 15:56:50 env -- common/autotest_common.sh@10 -- # set +x 00:05:15.383 ************************************ 00:05:15.383 START TEST env_mem_callbacks 00:05:15.383 ************************************ 00:05:15.383 15:56:50 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:15.383 EAL: Detected CPU lcores: 128 00:05:15.383 EAL: Detected NUMA nodes: 2 00:05:15.383 EAL: Detected shared linkage of DPDK 00:05:15.383 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:15.383 EAL: Selected IOVA mode 'VA' 00:05:15.383 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.383 EAL: VFIO support initialized 00:05:15.383 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:15.383 00:05:15.383 00:05:15.383 CUnit - A unit testing framework for C - Version 2.1-3 00:05:15.383 http://cunit.sourceforge.net/ 00:05:15.383 00:05:15.383 00:05:15.383 Suite: memory 00:05:15.383 Test: test ... 
00:05:15.383 register 0x200000200000 2097152 00:05:15.383 malloc 3145728 00:05:15.383 register 0x200000400000 4194304 00:05:15.383 buf 0x200000500000 len 3145728 PASSED 00:05:15.383 malloc 64 00:05:15.383 buf 0x2000004fff40 len 64 PASSED 00:05:15.383 malloc 4194304 00:05:15.383 register 0x200000800000 6291456 00:05:15.383 buf 0x200000a00000 len 4194304 PASSED 00:05:15.383 free 0x200000500000 3145728 00:05:15.383 free 0x2000004fff40 64 00:05:15.383 unregister 0x200000400000 4194304 PASSED 00:05:15.383 free 0x200000a00000 4194304 00:05:15.383 unregister 0x200000800000 6291456 PASSED 00:05:15.383 malloc 8388608 00:05:15.383 register 0x200000400000 10485760 00:05:15.383 buf 0x200000600000 len 8388608 PASSED 00:05:15.383 free 0x200000600000 8388608 00:05:15.383 unregister 0x200000400000 10485760 PASSED 00:05:15.383 passed 00:05:15.383 00:05:15.383 Run Summary: Type Total Ran Passed Failed Inactive 00:05:15.383 suites 1 1 n/a 0 0 00:05:15.383 tests 1 1 1 0 0 00:05:15.383 asserts 15 15 15 0 n/a 00:05:15.383 00:05:15.383 Elapsed time = 0.007 seconds 00:05:15.383 00:05:15.383 real 0m0.061s 00:05:15.383 user 0m0.018s 00:05:15.383 sys 0m0.042s 00:05:15.383 15:56:50 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.383 15:56:50 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:15.383 ************************************ 00:05:15.383 END TEST env_mem_callbacks 00:05:15.383 ************************************ 00:05:15.383 15:56:50 env -- common/autotest_common.sh@1142 -- # return 0 00:05:15.383 00:05:15.383 real 0m7.312s 00:05:15.383 user 0m0.997s 00:05:15.383 sys 0m0.864s 00:05:15.383 15:56:50 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.383 15:56:50 env -- common/autotest_common.sh@10 -- # set +x 00:05:15.383 ************************************ 00:05:15.383 END TEST env 00:05:15.383 ************************************ 00:05:15.383 15:56:51 -- common/autotest_common.sh@1142 -- # return 0 00:05:15.383 15:56:51 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:15.383 15:56:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.383 15:56:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.383 15:56:51 -- common/autotest_common.sh@10 -- # set +x 00:05:15.383 ************************************ 00:05:15.383 START TEST rpc 00:05:15.383 ************************************ 00:05:15.383 15:56:51 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:15.383 * Looking for test storage... 00:05:15.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:15.383 15:56:51 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2067361 00:05:15.383 15:56:51 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:15.383 15:56:51 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:15.383 15:56:51 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2067361 00:05:15.383 15:56:51 rpc -- common/autotest_common.sh@829 -- # '[' -z 2067361 ']' 00:05:15.383 15:56:51 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.383 15:56:51 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:15.383 15:56:51 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
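The rpc suite starting here launches the target with the bdev tracepoint group enabled and then waits for the UNIX-domain RPC socket before issuing commands. A rough by-hand equivalent, assuming a built tree and the default socket path (the polling loop is a crude stand-in for the harness's waitforlisten helper):

  cd /path/to/spdk                                      # assumption: built SPDK checkout
  ./build/bin/spdk_tgt -e bdev &
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # wait for the RPC socket to appear
  ./scripts/rpc.py rpc_get_methods > /dev/null && echo 'target is up'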
00:05:15.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.383 15:56:51 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:15.383 15:56:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.383 [2024-07-15 15:56:51.218504] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:05:15.383 [2024-07-15 15:56:51.218575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2067361 ] 00:05:15.644 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.644 [2024-07-15 15:56:51.284930] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.644 [2024-07-15 15:56:51.358078] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:15.644 [2024-07-15 15:56:51.358120] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2067361' to capture a snapshot of events at runtime. 00:05:15.644 [2024-07-15 15:56:51.358132] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:15.644 [2024-07-15 15:56:51.358139] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:15.644 [2024-07-15 15:56:51.358145] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2067361 for offline analysis/debug. 00:05:15.644 [2024-07-15 15:56:51.358170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.215 15:56:51 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:16.215 15:56:51 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:16.216 15:56:51 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:16.216 15:56:51 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:16.216 15:56:51 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:16.216 15:56:51 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:16.216 15:56:51 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:16.216 15:56:51 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.216 15:56:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.216 ************************************ 00:05:16.216 START TEST rpc_integrity 00:05:16.216 ************************************ 00:05:16.216 15:56:52 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:16.216 15:56:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:16.216 15:56:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.216 15:56:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.216 15:56:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.216 15:56:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:05:16.216 15:56:52 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:16.477 15:56:52 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:16.477 15:56:52 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:16.477 15:56:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.477 15:56:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.477 15:56:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.477 15:56:52 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:16.477 15:56:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:16.477 15:56:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.477 15:56:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.477 15:56:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.477 15:56:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:16.477 { 00:05:16.477 "name": "Malloc0", 00:05:16.477 "aliases": [ 00:05:16.477 "4cc67631-30de-4c89-bc5f-5417672f8378" 00:05:16.477 ], 00:05:16.477 "product_name": "Malloc disk", 00:05:16.477 "block_size": 512, 00:05:16.477 "num_blocks": 16384, 00:05:16.477 "uuid": "4cc67631-30de-4c89-bc5f-5417672f8378", 00:05:16.477 "assigned_rate_limits": { 00:05:16.477 "rw_ios_per_sec": 0, 00:05:16.477 "rw_mbytes_per_sec": 0, 00:05:16.477 "r_mbytes_per_sec": 0, 00:05:16.477 "w_mbytes_per_sec": 0 00:05:16.477 }, 00:05:16.477 "claimed": false, 00:05:16.477 "zoned": false, 00:05:16.477 "supported_io_types": { 00:05:16.477 "read": true, 00:05:16.477 "write": true, 00:05:16.477 "unmap": true, 00:05:16.477 "flush": true, 00:05:16.477 "reset": true, 00:05:16.477 "nvme_admin": false, 00:05:16.477 "nvme_io": false, 00:05:16.477 "nvme_io_md": false, 00:05:16.477 "write_zeroes": true, 00:05:16.477 "zcopy": true, 00:05:16.477 "get_zone_info": false, 00:05:16.477 "zone_management": false, 00:05:16.477 "zone_append": false, 00:05:16.477 "compare": false, 00:05:16.477 "compare_and_write": false, 00:05:16.477 "abort": true, 00:05:16.477 "seek_hole": false, 00:05:16.477 "seek_data": false, 00:05:16.477 "copy": true, 00:05:16.477 "nvme_iov_md": false 00:05:16.477 }, 00:05:16.477 "memory_domains": [ 00:05:16.477 { 00:05:16.477 "dma_device_id": "system", 00:05:16.477 "dma_device_type": 1 00:05:16.477 }, 00:05:16.477 { 00:05:16.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:16.477 "dma_device_type": 2 00:05:16.477 } 00:05:16.477 ], 00:05:16.477 "driver_specific": {} 00:05:16.477 } 00:05:16.477 ]' 00:05:16.477 15:56:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:16.477 15:56:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:16.477 15:56:52 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:16.477 15:56:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.477 15:56:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.477 [2024-07-15 15:56:52.170363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:16.477 [2024-07-15 15:56:52.170396] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:16.477 [2024-07-15 15:56:52.170408] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1724d80 00:05:16.477 [2024-07-15 15:56:52.170415] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:16.477 
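The rpc_integrity test running above drives spdk_tgt purely over its JSON-RPC socket: it creates an 8 MB malloc bdev with 512-byte blocks, layers a passthru bdev named Passthru0 on top of it, verifies both appear in bdev_get_bdevs, and then tears them down in reverse order. The rpc_cmd helper seen in the trace wraps SPDK's scripts/rpc.py client; a minimal standalone sketch of the same sequence, assuming a running spdk_tgt on the default /var/tmp/spdk.sock socket and paths relative to the SPDK tree, might look like:

  # create an 8 MB malloc bdev with 512-byte blocks; prints the bdev name (Malloc0 here)
  malloc=$(scripts/rpc.py bdev_malloc_create 8 512)
  # claim it with a passthru vbdev named Passthru0
  scripts/rpc.py bdev_passthru_create -b "$malloc" -p Passthru0
  # both bdevs should now be listed; the test asserts jq length == 2
  scripts/rpc.py bdev_get_bdevs | jq length
  # tear down in reverse order
  scripts/rpc.py bdev_passthru_delete Passthru0
  scripts/rpc.py bdev_malloc_delete "$malloc"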
[2024-07-15 15:56:52.171768] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:16.477 [2024-07-15 15:56:52.171790] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:16.477 Passthru0 00:05:16.477 15:56:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.477 15:56:52 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:16.477 15:56:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.477 15:56:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.477 15:56:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.477 15:56:52 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:16.477 { 00:05:16.477 "name": "Malloc0", 00:05:16.477 "aliases": [ 00:05:16.477 "4cc67631-30de-4c89-bc5f-5417672f8378" 00:05:16.477 ], 00:05:16.477 "product_name": "Malloc disk", 00:05:16.477 "block_size": 512, 00:05:16.477 "num_blocks": 16384, 00:05:16.477 "uuid": "4cc67631-30de-4c89-bc5f-5417672f8378", 00:05:16.477 "assigned_rate_limits": { 00:05:16.477 "rw_ios_per_sec": 0, 00:05:16.477 "rw_mbytes_per_sec": 0, 00:05:16.477 "r_mbytes_per_sec": 0, 00:05:16.477 "w_mbytes_per_sec": 0 00:05:16.477 }, 00:05:16.477 "claimed": true, 00:05:16.477 "claim_type": "exclusive_write", 00:05:16.477 "zoned": false, 00:05:16.477 "supported_io_types": { 00:05:16.477 "read": true, 00:05:16.477 "write": true, 00:05:16.477 "unmap": true, 00:05:16.477 "flush": true, 00:05:16.477 "reset": true, 00:05:16.477 "nvme_admin": false, 00:05:16.477 "nvme_io": false, 00:05:16.477 "nvme_io_md": false, 00:05:16.477 "write_zeroes": true, 00:05:16.477 "zcopy": true, 00:05:16.477 "get_zone_info": false, 00:05:16.477 "zone_management": false, 00:05:16.477 "zone_append": false, 00:05:16.477 "compare": false, 00:05:16.477 "compare_and_write": false, 00:05:16.477 "abort": true, 00:05:16.477 "seek_hole": false, 00:05:16.477 "seek_data": false, 00:05:16.477 "copy": true, 00:05:16.477 "nvme_iov_md": false 00:05:16.477 }, 00:05:16.477 "memory_domains": [ 00:05:16.477 { 00:05:16.477 "dma_device_id": "system", 00:05:16.477 "dma_device_type": 1 00:05:16.477 }, 00:05:16.477 { 00:05:16.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:16.477 "dma_device_type": 2 00:05:16.477 } 00:05:16.477 ], 00:05:16.477 "driver_specific": {} 00:05:16.477 }, 00:05:16.477 { 00:05:16.477 "name": "Passthru0", 00:05:16.477 "aliases": [ 00:05:16.477 "046ae9c3-d445-5a80-8c59-9d5a9515e9e4" 00:05:16.477 ], 00:05:16.477 "product_name": "passthru", 00:05:16.477 "block_size": 512, 00:05:16.477 "num_blocks": 16384, 00:05:16.477 "uuid": "046ae9c3-d445-5a80-8c59-9d5a9515e9e4", 00:05:16.477 "assigned_rate_limits": { 00:05:16.477 "rw_ios_per_sec": 0, 00:05:16.477 "rw_mbytes_per_sec": 0, 00:05:16.477 "r_mbytes_per_sec": 0, 00:05:16.477 "w_mbytes_per_sec": 0 00:05:16.477 }, 00:05:16.477 "claimed": false, 00:05:16.477 "zoned": false, 00:05:16.477 "supported_io_types": { 00:05:16.477 "read": true, 00:05:16.477 "write": true, 00:05:16.477 "unmap": true, 00:05:16.477 "flush": true, 00:05:16.477 "reset": true, 00:05:16.477 "nvme_admin": false, 00:05:16.477 "nvme_io": false, 00:05:16.477 "nvme_io_md": false, 00:05:16.477 "write_zeroes": true, 00:05:16.477 "zcopy": true, 00:05:16.477 "get_zone_info": false, 00:05:16.477 "zone_management": false, 00:05:16.477 "zone_append": false, 00:05:16.477 "compare": false, 00:05:16.477 "compare_and_write": false, 00:05:16.477 "abort": true, 00:05:16.477 "seek_hole": false, 
00:05:16.477 "seek_data": false, 00:05:16.477 "copy": true, 00:05:16.477 "nvme_iov_md": false 00:05:16.477 }, 00:05:16.477 "memory_domains": [ 00:05:16.477 { 00:05:16.477 "dma_device_id": "system", 00:05:16.477 "dma_device_type": 1 00:05:16.477 }, 00:05:16.477 { 00:05:16.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:16.477 "dma_device_type": 2 00:05:16.477 } 00:05:16.477 ], 00:05:16.477 "driver_specific": { 00:05:16.477 "passthru": { 00:05:16.477 "name": "Passthru0", 00:05:16.477 "base_bdev_name": "Malloc0" 00:05:16.477 } 00:05:16.477 } 00:05:16.477 } 00:05:16.477 ]' 00:05:16.477 15:56:52 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:16.477 15:56:52 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:16.477 15:56:52 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:16.477 15:56:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.477 15:56:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.477 15:56:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.477 15:56:52 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:16.477 15:56:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.478 15:56:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.478 15:56:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.478 15:56:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:16.478 15:56:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.478 15:56:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.478 15:56:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.478 15:56:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:16.478 15:56:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:16.738 15:56:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:16.738 00:05:16.738 real 0m0.298s 00:05:16.738 user 0m0.195s 00:05:16.738 sys 0m0.037s 00:05:16.738 15:56:52 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.738 15:56:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.738 ************************************ 00:05:16.738 END TEST rpc_integrity 00:05:16.738 ************************************ 00:05:16.738 15:56:52 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:16.738 15:56:52 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:16.738 15:56:52 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:16.738 15:56:52 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.738 15:56:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.738 ************************************ 00:05:16.738 START TEST rpc_plugins 00:05:16.738 ************************************ 00:05:16.738 15:56:52 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:16.738 15:56:52 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:16.738 15:56:52 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.738 15:56:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:16.738 15:56:52 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.738 15:56:52 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:16.738 15:56:52 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:05:16.738 15:56:52 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.738 15:56:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:16.738 15:56:52 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.738 15:56:52 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:16.738 { 00:05:16.738 "name": "Malloc1", 00:05:16.738 "aliases": [ 00:05:16.739 "a3167c00-433e-4cf5-a645-920ecceee70a" 00:05:16.739 ], 00:05:16.739 "product_name": "Malloc disk", 00:05:16.739 "block_size": 4096, 00:05:16.739 "num_blocks": 256, 00:05:16.739 "uuid": "a3167c00-433e-4cf5-a645-920ecceee70a", 00:05:16.739 "assigned_rate_limits": { 00:05:16.739 "rw_ios_per_sec": 0, 00:05:16.739 "rw_mbytes_per_sec": 0, 00:05:16.739 "r_mbytes_per_sec": 0, 00:05:16.739 "w_mbytes_per_sec": 0 00:05:16.739 }, 00:05:16.739 "claimed": false, 00:05:16.739 "zoned": false, 00:05:16.739 "supported_io_types": { 00:05:16.739 "read": true, 00:05:16.739 "write": true, 00:05:16.739 "unmap": true, 00:05:16.739 "flush": true, 00:05:16.739 "reset": true, 00:05:16.739 "nvme_admin": false, 00:05:16.739 "nvme_io": false, 00:05:16.739 "nvme_io_md": false, 00:05:16.739 "write_zeroes": true, 00:05:16.739 "zcopy": true, 00:05:16.739 "get_zone_info": false, 00:05:16.739 "zone_management": false, 00:05:16.739 "zone_append": false, 00:05:16.739 "compare": false, 00:05:16.739 "compare_and_write": false, 00:05:16.739 "abort": true, 00:05:16.739 "seek_hole": false, 00:05:16.739 "seek_data": false, 00:05:16.739 "copy": true, 00:05:16.739 "nvme_iov_md": false 00:05:16.739 }, 00:05:16.739 "memory_domains": [ 00:05:16.739 { 00:05:16.739 "dma_device_id": "system", 00:05:16.739 "dma_device_type": 1 00:05:16.739 }, 00:05:16.739 { 00:05:16.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:16.739 "dma_device_type": 2 00:05:16.739 } 00:05:16.739 ], 00:05:16.739 "driver_specific": {} 00:05:16.739 } 00:05:16.739 ]' 00:05:16.739 15:56:52 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:16.739 15:56:52 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:16.739 15:56:52 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:16.739 15:56:52 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.739 15:56:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:16.739 15:56:52 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.739 15:56:52 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:16.739 15:56:52 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.739 15:56:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:16.739 15:56:52 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.739 15:56:52 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:16.739 15:56:52 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:16.739 15:56:52 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:16.739 00:05:16.739 real 0m0.147s 00:05:16.739 user 0m0.094s 00:05:16.739 sys 0m0.020s 00:05:16.739 15:56:52 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.739 15:56:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:16.739 ************************************ 00:05:16.739 END TEST rpc_plugins 00:05:16.739 ************************************ 00:05:16.999 15:56:52 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:16.999 15:56:52 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:16.999 15:56:52 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:16.999 15:56:52 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.999 15:56:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.999 ************************************ 00:05:16.999 START TEST rpc_trace_cmd_test 00:05:16.999 ************************************ 00:05:16.999 15:56:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:16.999 15:56:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:16.999 15:56:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:16.999 15:56:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.999 15:56:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:16.999 15:56:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.999 15:56:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:16.999 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2067361", 00:05:16.999 "tpoint_group_mask": "0x8", 00:05:16.999 "iscsi_conn": { 00:05:16.999 "mask": "0x2", 00:05:16.999 "tpoint_mask": "0x0" 00:05:16.999 }, 00:05:16.999 "scsi": { 00:05:16.999 "mask": "0x4", 00:05:16.999 "tpoint_mask": "0x0" 00:05:16.999 }, 00:05:16.999 "bdev": { 00:05:16.999 "mask": "0x8", 00:05:16.999 "tpoint_mask": "0xffffffffffffffff" 00:05:16.999 }, 00:05:16.999 "nvmf_rdma": { 00:05:16.999 "mask": "0x10", 00:05:16.999 "tpoint_mask": "0x0" 00:05:16.999 }, 00:05:16.999 "nvmf_tcp": { 00:05:16.999 "mask": "0x20", 00:05:16.999 "tpoint_mask": "0x0" 00:05:16.999 }, 00:05:16.999 "ftl": { 00:05:16.999 "mask": "0x40", 00:05:16.999 "tpoint_mask": "0x0" 00:05:16.999 }, 00:05:16.999 "blobfs": { 00:05:16.999 "mask": "0x80", 00:05:16.999 "tpoint_mask": "0x0" 00:05:16.999 }, 00:05:16.999 "dsa": { 00:05:16.999 "mask": "0x200", 00:05:16.999 "tpoint_mask": "0x0" 00:05:16.999 }, 00:05:16.999 "thread": { 00:05:16.999 "mask": "0x400", 00:05:16.999 "tpoint_mask": "0x0" 00:05:16.999 }, 00:05:16.999 "nvme_pcie": { 00:05:16.999 "mask": "0x800", 00:05:16.999 "tpoint_mask": "0x0" 00:05:16.999 }, 00:05:16.999 "iaa": { 00:05:16.999 "mask": "0x1000", 00:05:16.999 "tpoint_mask": "0x0" 00:05:16.999 }, 00:05:16.999 "nvme_tcp": { 00:05:16.999 "mask": "0x2000", 00:05:16.999 "tpoint_mask": "0x0" 00:05:16.999 }, 00:05:16.999 "bdev_nvme": { 00:05:16.999 "mask": "0x4000", 00:05:16.999 "tpoint_mask": "0x0" 00:05:16.999 }, 00:05:16.999 "sock": { 00:05:16.999 "mask": "0x8000", 00:05:16.999 "tpoint_mask": "0x0" 00:05:16.999 } 00:05:16.999 }' 00:05:16.999 15:56:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:16.999 15:56:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:16.999 15:56:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:16.999 15:56:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:16.999 15:56:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:16.999 15:56:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:16.999 15:56:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:16.999 15:56:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:16.999 15:56:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:17.260 15:56:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
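The trace assertions just above check that starting spdk_tgt with '-e bdev' enabled only the bdev tracepoint group: trace_get_info reports tpoint_group_mask 0x8, a per-group mask of 0xffffffffffffffff for bdev, and the shared-memory trace file under /dev/shm. A condensed sketch of the same checks, assuming scripts/rpc.py against the default RPC socket, could be:

  # spdk_tgt was launched with: spdk_tgt -e bdev   (enable the bdev tracepoint group)
  info=$(scripts/rpc.py trace_get_info)
  echo "$info" | jq -r .tpoint_group_mask     # expect "0x8" (the bdev group, per the dump above)
  echo "$info" | jq -r .bdev.tpoint_mask      # expect 0xffffffffffffffff (all bdev tracepoints on)
  echo "$info" | jq -r .tpoint_shm_path       # /dev/shm/spdk_tgt_trace.pid<pid>
  # the captured events can later be decoded offline with:
  #   spdk_trace -s spdk_tgt -p <pid>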
00:05:17.260 00:05:17.260 real 0m0.251s 00:05:17.260 user 0m0.213s 00:05:17.260 sys 0m0.029s 00:05:17.260 15:56:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.260 15:56:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:17.260 ************************************ 00:05:17.260 END TEST rpc_trace_cmd_test 00:05:17.260 ************************************ 00:05:17.260 15:56:52 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:17.260 15:56:52 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:17.260 15:56:52 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:17.260 15:56:52 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:17.260 15:56:52 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.260 15:56:52 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.260 15:56:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.260 ************************************ 00:05:17.260 START TEST rpc_daemon_integrity 00:05:17.260 ************************************ 00:05:17.260 15:56:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:17.260 15:56:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:17.260 15:56:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.260 15:56:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.260 15:56:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.260 15:56:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:17.260 15:56:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:17.260 15:56:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:17.260 15:56:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:17.260 15:56:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.260 15:56:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.260 15:56:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.260 15:56:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:17.260 15:56:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:17.260 15:56:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.260 15:56:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.260 15:56:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.260 15:56:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:17.260 { 00:05:17.260 "name": "Malloc2", 00:05:17.260 "aliases": [ 00:05:17.260 "db08e1be-9798-4464-a0fe-00ad5c9e46bd" 00:05:17.260 ], 00:05:17.260 "product_name": "Malloc disk", 00:05:17.260 "block_size": 512, 00:05:17.260 "num_blocks": 16384, 00:05:17.260 "uuid": "db08e1be-9798-4464-a0fe-00ad5c9e46bd", 00:05:17.260 "assigned_rate_limits": { 00:05:17.260 "rw_ios_per_sec": 0, 00:05:17.260 "rw_mbytes_per_sec": 0, 00:05:17.260 "r_mbytes_per_sec": 0, 00:05:17.260 "w_mbytes_per_sec": 0 00:05:17.260 }, 00:05:17.260 "claimed": false, 00:05:17.260 "zoned": false, 00:05:17.260 "supported_io_types": { 00:05:17.260 "read": true, 00:05:17.260 "write": true, 00:05:17.260 "unmap": true, 00:05:17.260 "flush": true, 00:05:17.260 "reset": true, 00:05:17.260 "nvme_admin": false, 00:05:17.260 "nvme_io": false, 
00:05:17.260 "nvme_io_md": false, 00:05:17.260 "write_zeroes": true, 00:05:17.260 "zcopy": true, 00:05:17.260 "get_zone_info": false, 00:05:17.260 "zone_management": false, 00:05:17.260 "zone_append": false, 00:05:17.260 "compare": false, 00:05:17.260 "compare_and_write": false, 00:05:17.260 "abort": true, 00:05:17.260 "seek_hole": false, 00:05:17.260 "seek_data": false, 00:05:17.260 "copy": true, 00:05:17.260 "nvme_iov_md": false 00:05:17.260 }, 00:05:17.260 "memory_domains": [ 00:05:17.260 { 00:05:17.260 "dma_device_id": "system", 00:05:17.260 "dma_device_type": 1 00:05:17.260 }, 00:05:17.260 { 00:05:17.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.260 "dma_device_type": 2 00:05:17.260 } 00:05:17.260 ], 00:05:17.260 "driver_specific": {} 00:05:17.260 } 00:05:17.260 ]' 00:05:17.260 15:56:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:17.260 15:56:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:17.260 15:56:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:17.260 15:56:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.260 15:56:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.260 [2024-07-15 15:56:53.088853] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:17.260 [2024-07-15 15:56:53.088884] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:17.260 [2024-07-15 15:56:53.088896] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1725a90 00:05:17.260 [2024-07-15 15:56:53.088902] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:17.260 [2024-07-15 15:56:53.090119] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:17.260 [2024-07-15 15:56:53.090145] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:17.260 Passthru0 00:05:17.260 15:56:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.260 15:56:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:17.260 15:56:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.260 15:56:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.521 15:56:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.521 15:56:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:17.521 { 00:05:17.521 "name": "Malloc2", 00:05:17.521 "aliases": [ 00:05:17.521 "db08e1be-9798-4464-a0fe-00ad5c9e46bd" 00:05:17.521 ], 00:05:17.521 "product_name": "Malloc disk", 00:05:17.521 "block_size": 512, 00:05:17.521 "num_blocks": 16384, 00:05:17.521 "uuid": "db08e1be-9798-4464-a0fe-00ad5c9e46bd", 00:05:17.521 "assigned_rate_limits": { 00:05:17.521 "rw_ios_per_sec": 0, 00:05:17.521 "rw_mbytes_per_sec": 0, 00:05:17.521 "r_mbytes_per_sec": 0, 00:05:17.521 "w_mbytes_per_sec": 0 00:05:17.521 }, 00:05:17.521 "claimed": true, 00:05:17.521 "claim_type": "exclusive_write", 00:05:17.521 "zoned": false, 00:05:17.521 "supported_io_types": { 00:05:17.521 "read": true, 00:05:17.521 "write": true, 00:05:17.521 "unmap": true, 00:05:17.521 "flush": true, 00:05:17.521 "reset": true, 00:05:17.521 "nvme_admin": false, 00:05:17.521 "nvme_io": false, 00:05:17.521 "nvme_io_md": false, 00:05:17.521 "write_zeroes": true, 00:05:17.521 "zcopy": true, 00:05:17.521 "get_zone_info": 
false, 00:05:17.521 "zone_management": false, 00:05:17.521 "zone_append": false, 00:05:17.521 "compare": false, 00:05:17.521 "compare_and_write": false, 00:05:17.521 "abort": true, 00:05:17.521 "seek_hole": false, 00:05:17.521 "seek_data": false, 00:05:17.521 "copy": true, 00:05:17.521 "nvme_iov_md": false 00:05:17.521 }, 00:05:17.521 "memory_domains": [ 00:05:17.521 { 00:05:17.521 "dma_device_id": "system", 00:05:17.521 "dma_device_type": 1 00:05:17.521 }, 00:05:17.521 { 00:05:17.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.521 "dma_device_type": 2 00:05:17.521 } 00:05:17.521 ], 00:05:17.521 "driver_specific": {} 00:05:17.521 }, 00:05:17.521 { 00:05:17.521 "name": "Passthru0", 00:05:17.521 "aliases": [ 00:05:17.521 "7a291a38-4a81-5874-89dc-ac96e5d03dbd" 00:05:17.521 ], 00:05:17.521 "product_name": "passthru", 00:05:17.521 "block_size": 512, 00:05:17.521 "num_blocks": 16384, 00:05:17.521 "uuid": "7a291a38-4a81-5874-89dc-ac96e5d03dbd", 00:05:17.521 "assigned_rate_limits": { 00:05:17.521 "rw_ios_per_sec": 0, 00:05:17.521 "rw_mbytes_per_sec": 0, 00:05:17.521 "r_mbytes_per_sec": 0, 00:05:17.521 "w_mbytes_per_sec": 0 00:05:17.521 }, 00:05:17.521 "claimed": false, 00:05:17.521 "zoned": false, 00:05:17.521 "supported_io_types": { 00:05:17.521 "read": true, 00:05:17.521 "write": true, 00:05:17.521 "unmap": true, 00:05:17.521 "flush": true, 00:05:17.521 "reset": true, 00:05:17.521 "nvme_admin": false, 00:05:17.521 "nvme_io": false, 00:05:17.521 "nvme_io_md": false, 00:05:17.521 "write_zeroes": true, 00:05:17.521 "zcopy": true, 00:05:17.521 "get_zone_info": false, 00:05:17.521 "zone_management": false, 00:05:17.521 "zone_append": false, 00:05:17.521 "compare": false, 00:05:17.521 "compare_and_write": false, 00:05:17.521 "abort": true, 00:05:17.521 "seek_hole": false, 00:05:17.521 "seek_data": false, 00:05:17.521 "copy": true, 00:05:17.521 "nvme_iov_md": false 00:05:17.521 }, 00:05:17.521 "memory_domains": [ 00:05:17.521 { 00:05:17.521 "dma_device_id": "system", 00:05:17.521 "dma_device_type": 1 00:05:17.521 }, 00:05:17.521 { 00:05:17.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.521 "dma_device_type": 2 00:05:17.521 } 00:05:17.521 ], 00:05:17.521 "driver_specific": { 00:05:17.521 "passthru": { 00:05:17.521 "name": "Passthru0", 00:05:17.521 "base_bdev_name": "Malloc2" 00:05:17.521 } 00:05:17.521 } 00:05:17.521 } 00:05:17.521 ]' 00:05:17.521 15:56:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:17.521 15:56:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:17.521 15:56:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:17.521 15:56:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.521 15:56:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.521 15:56:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.521 15:56:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:17.521 15:56:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.521 15:56:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.521 15:56:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.521 15:56:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:17.521 15:56:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.521 15:56:53 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.521 15:56:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.521 15:56:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:17.521 15:56:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:17.521 15:56:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:17.521 00:05:17.521 real 0m0.299s 00:05:17.521 user 0m0.182s 00:05:17.521 sys 0m0.049s 00:05:17.521 15:56:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.521 15:56:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.521 ************************************ 00:05:17.521 END TEST rpc_daemon_integrity 00:05:17.521 ************************************ 00:05:17.521 15:56:53 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:17.521 15:56:53 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:17.521 15:56:53 rpc -- rpc/rpc.sh@84 -- # killprocess 2067361 00:05:17.521 15:56:53 rpc -- common/autotest_common.sh@948 -- # '[' -z 2067361 ']' 00:05:17.521 15:56:53 rpc -- common/autotest_common.sh@952 -- # kill -0 2067361 00:05:17.521 15:56:53 rpc -- common/autotest_common.sh@953 -- # uname 00:05:17.521 15:56:53 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:17.521 15:56:53 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2067361 00:05:17.521 15:56:53 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:17.521 15:56:53 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:17.521 15:56:53 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2067361' 00:05:17.521 killing process with pid 2067361 00:05:17.521 15:56:53 rpc -- common/autotest_common.sh@967 -- # kill 2067361 00:05:17.521 15:56:53 rpc -- common/autotest_common.sh@972 -- # wait 2067361 00:05:17.781 00:05:17.781 real 0m2.495s 00:05:17.781 user 0m3.288s 00:05:17.781 sys 0m0.706s 00:05:17.781 15:56:53 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.781 15:56:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.781 ************************************ 00:05:17.781 END TEST rpc 00:05:17.781 ************************************ 00:05:17.781 15:56:53 -- common/autotest_common.sh@1142 -- # return 0 00:05:17.781 15:56:53 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:17.781 15:56:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.781 15:56:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.781 15:56:53 -- common/autotest_common.sh@10 -- # set +x 00:05:17.781 ************************************ 00:05:17.781 START TEST skip_rpc 00:05:17.781 ************************************ 00:05:17.781 15:56:53 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:18.042 * Looking for test storage... 
00:05:18.042 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:18.042 15:56:53 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:18.042 15:56:53 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:18.042 15:56:53 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:18.042 15:56:53 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:18.042 15:56:53 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.042 15:56:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.042 ************************************ 00:05:18.042 START TEST skip_rpc 00:05:18.042 ************************************ 00:05:18.042 15:56:53 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:18.042 15:56:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2067900 00:05:18.042 15:56:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:18.042 15:56:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:18.042 15:56:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:18.042 [2024-07-15 15:56:53.821652] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:05:18.042 [2024-07-15 15:56:53.821714] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2067900 ] 00:05:18.042 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.303 [2024-07-15 15:56:53.887315] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.303 [2024-07-15 15:56:53.961765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.649 15:56:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:23.649 15:56:58 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:23.649 15:56:58 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:23.649 15:56:58 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:23.649 15:56:58 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.649 15:56:58 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:23.649 15:56:58 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.649 15:56:58 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:23.649 15:56:58 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.649 15:56:58 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.649 15:56:58 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:23.649 15:56:58 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:23.649 15:56:58 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:23.649 15:56:58 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:23.649 15:56:58 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:23.649 15:56:58 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:23.649 15:56:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2067900 00:05:23.649 15:56:58 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 2067900 ']' 00:05:23.649 15:56:58 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 2067900 00:05:23.649 15:56:58 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:23.649 15:56:58 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:23.649 15:56:58 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2067900 00:05:23.649 15:56:58 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:23.649 15:56:58 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:23.649 15:56:58 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2067900' 00:05:23.649 killing process with pid 2067900 00:05:23.649 15:56:58 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 2067900 00:05:23.649 15:56:58 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 2067900 00:05:23.649 00:05:23.649 real 0m5.280s 00:05:23.649 user 0m5.074s 00:05:23.649 sys 0m0.244s 00:05:23.649 15:56:59 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.649 15:56:59 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.649 ************************************ 00:05:23.649 END TEST skip_rpc 00:05:23.649 ************************************ 00:05:23.649 15:56:59 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:23.649 15:56:59 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:23.649 15:56:59 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:23.649 15:56:59 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.649 15:56:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.649 ************************************ 00:05:23.649 START TEST skip_rpc_with_json 00:05:23.649 ************************************ 00:05:23.649 15:56:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:23.649 15:56:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:23.649 15:56:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2069095 00:05:23.649 15:56:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:23.649 15:56:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2069095 00:05:23.649 15:56:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:23.649 15:56:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 2069095 ']' 00:05:23.649 15:56:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.649 15:56:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:23.649 15:56:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
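The skip_rpc case that finishes above starts the target with --no-rpc-server and then asserts that an RPC client cannot reach it: the NOT wrapper expects rpc_cmd spdk_get_version to fail, which it does, and the target is then killed. Stripped of the test harness, and assuming spdk_tgt and scripts/rpc.py are invoked from the SPDK tree with the default socket, the essence is:

  # start the target without its JSON-RPC server
  build/bin/spdk_tgt --no-rpc-server -m 0x1 & tgt_pid=$!
  sleep 5                                   # the harness waits more carefully; a sleep stands in here
  # any RPC must now fail because nothing listens on /var/tmp/spdk.sock
  if scripts/rpc.py spdk_get_version; then
      echo "unexpected: RPC succeeded without an RPC server" >&2
      exit 1
  fi
  kill "$tgt_pid"; wait "$tgt_pid" || true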
00:05:23.649 15:56:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:23.649 15:56:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:23.649 [2024-07-15 15:56:59.176973] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:05:23.649 [2024-07-15 15:56:59.177034] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2069095 ] 00:05:23.649 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.649 [2024-07-15 15:56:59.239642] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.649 [2024-07-15 15:56:59.310040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.251 15:56:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:24.251 15:56:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:24.251 15:56:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:24.251 15:56:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.251 15:56:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:24.251 [2024-07-15 15:56:59.956667] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:24.251 request: 00:05:24.251 { 00:05:24.251 "trtype": "tcp", 00:05:24.251 "method": "nvmf_get_transports", 00:05:24.251 "req_id": 1 00:05:24.251 } 00:05:24.251 Got JSON-RPC error response 00:05:24.251 response: 00:05:24.251 { 00:05:24.251 "code": -19, 00:05:24.251 "message": "No such device" 00:05:24.251 } 00:05:24.251 15:56:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:24.251 15:56:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:24.251 15:56:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.251 15:56:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:24.251 [2024-07-15 15:56:59.968788] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:24.251 15:56:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.251 15:56:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:24.251 15:56:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.251 15:56:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:24.511 15:57:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.511 15:57:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:24.511 { 00:05:24.511 "subsystems": [ 00:05:24.511 { 00:05:24.511 "subsystem": "vfio_user_target", 00:05:24.511 "config": null 00:05:24.511 }, 00:05:24.511 { 00:05:24.511 "subsystem": "keyring", 00:05:24.511 "config": [] 00:05:24.511 }, 00:05:24.511 { 00:05:24.511 "subsystem": "iobuf", 00:05:24.511 "config": [ 00:05:24.511 { 00:05:24.511 "method": "iobuf_set_options", 00:05:24.511 "params": { 00:05:24.511 "small_pool_count": 8192, 00:05:24.511 "large_pool_count": 1024, 00:05:24.511 "small_bufsize": 8192, 00:05:24.511 "large_bufsize": 
135168 00:05:24.511 } 00:05:24.511 } 00:05:24.511 ] 00:05:24.511 }, 00:05:24.511 { 00:05:24.511 "subsystem": "sock", 00:05:24.511 "config": [ 00:05:24.511 { 00:05:24.511 "method": "sock_set_default_impl", 00:05:24.511 "params": { 00:05:24.511 "impl_name": "posix" 00:05:24.511 } 00:05:24.511 }, 00:05:24.511 { 00:05:24.511 "method": "sock_impl_set_options", 00:05:24.511 "params": { 00:05:24.511 "impl_name": "ssl", 00:05:24.511 "recv_buf_size": 4096, 00:05:24.511 "send_buf_size": 4096, 00:05:24.511 "enable_recv_pipe": true, 00:05:24.511 "enable_quickack": false, 00:05:24.511 "enable_placement_id": 0, 00:05:24.511 "enable_zerocopy_send_server": true, 00:05:24.511 "enable_zerocopy_send_client": false, 00:05:24.511 "zerocopy_threshold": 0, 00:05:24.511 "tls_version": 0, 00:05:24.511 "enable_ktls": false 00:05:24.511 } 00:05:24.511 }, 00:05:24.511 { 00:05:24.511 "method": "sock_impl_set_options", 00:05:24.511 "params": { 00:05:24.511 "impl_name": "posix", 00:05:24.511 "recv_buf_size": 2097152, 00:05:24.511 "send_buf_size": 2097152, 00:05:24.511 "enable_recv_pipe": true, 00:05:24.511 "enable_quickack": false, 00:05:24.511 "enable_placement_id": 0, 00:05:24.511 "enable_zerocopy_send_server": true, 00:05:24.511 "enable_zerocopy_send_client": false, 00:05:24.511 "zerocopy_threshold": 0, 00:05:24.511 "tls_version": 0, 00:05:24.511 "enable_ktls": false 00:05:24.511 } 00:05:24.511 } 00:05:24.511 ] 00:05:24.511 }, 00:05:24.511 { 00:05:24.511 "subsystem": "vmd", 00:05:24.511 "config": [] 00:05:24.511 }, 00:05:24.511 { 00:05:24.511 "subsystem": "accel", 00:05:24.511 "config": [ 00:05:24.511 { 00:05:24.511 "method": "accel_set_options", 00:05:24.511 "params": { 00:05:24.511 "small_cache_size": 128, 00:05:24.511 "large_cache_size": 16, 00:05:24.511 "task_count": 2048, 00:05:24.511 "sequence_count": 2048, 00:05:24.511 "buf_count": 2048 00:05:24.511 } 00:05:24.511 } 00:05:24.511 ] 00:05:24.511 }, 00:05:24.511 { 00:05:24.511 "subsystem": "bdev", 00:05:24.511 "config": [ 00:05:24.511 { 00:05:24.511 "method": "bdev_set_options", 00:05:24.511 "params": { 00:05:24.511 "bdev_io_pool_size": 65535, 00:05:24.511 "bdev_io_cache_size": 256, 00:05:24.511 "bdev_auto_examine": true, 00:05:24.511 "iobuf_small_cache_size": 128, 00:05:24.511 "iobuf_large_cache_size": 16 00:05:24.511 } 00:05:24.511 }, 00:05:24.511 { 00:05:24.511 "method": "bdev_raid_set_options", 00:05:24.511 "params": { 00:05:24.511 "process_window_size_kb": 1024 00:05:24.511 } 00:05:24.511 }, 00:05:24.511 { 00:05:24.511 "method": "bdev_iscsi_set_options", 00:05:24.511 "params": { 00:05:24.511 "timeout_sec": 30 00:05:24.511 } 00:05:24.511 }, 00:05:24.511 { 00:05:24.511 "method": "bdev_nvme_set_options", 00:05:24.511 "params": { 00:05:24.511 "action_on_timeout": "none", 00:05:24.511 "timeout_us": 0, 00:05:24.511 "timeout_admin_us": 0, 00:05:24.511 "keep_alive_timeout_ms": 10000, 00:05:24.511 "arbitration_burst": 0, 00:05:24.511 "low_priority_weight": 0, 00:05:24.511 "medium_priority_weight": 0, 00:05:24.512 "high_priority_weight": 0, 00:05:24.512 "nvme_adminq_poll_period_us": 10000, 00:05:24.512 "nvme_ioq_poll_period_us": 0, 00:05:24.512 "io_queue_requests": 0, 00:05:24.512 "delay_cmd_submit": true, 00:05:24.512 "transport_retry_count": 4, 00:05:24.512 "bdev_retry_count": 3, 00:05:24.512 "transport_ack_timeout": 0, 00:05:24.512 "ctrlr_loss_timeout_sec": 0, 00:05:24.512 "reconnect_delay_sec": 0, 00:05:24.512 "fast_io_fail_timeout_sec": 0, 00:05:24.512 "disable_auto_failback": false, 00:05:24.512 "generate_uuids": false, 00:05:24.512 "transport_tos": 0, 
00:05:24.512 "nvme_error_stat": false, 00:05:24.512 "rdma_srq_size": 0, 00:05:24.512 "io_path_stat": false, 00:05:24.512 "allow_accel_sequence": false, 00:05:24.512 "rdma_max_cq_size": 0, 00:05:24.512 "rdma_cm_event_timeout_ms": 0, 00:05:24.512 "dhchap_digests": [ 00:05:24.512 "sha256", 00:05:24.512 "sha384", 00:05:24.512 "sha512" 00:05:24.512 ], 00:05:24.512 "dhchap_dhgroups": [ 00:05:24.512 "null", 00:05:24.512 "ffdhe2048", 00:05:24.512 "ffdhe3072", 00:05:24.512 "ffdhe4096", 00:05:24.512 "ffdhe6144", 00:05:24.512 "ffdhe8192" 00:05:24.512 ] 00:05:24.512 } 00:05:24.512 }, 00:05:24.512 { 00:05:24.512 "method": "bdev_nvme_set_hotplug", 00:05:24.512 "params": { 00:05:24.512 "period_us": 100000, 00:05:24.512 "enable": false 00:05:24.512 } 00:05:24.512 }, 00:05:24.512 { 00:05:24.512 "method": "bdev_wait_for_examine" 00:05:24.512 } 00:05:24.512 ] 00:05:24.512 }, 00:05:24.512 { 00:05:24.512 "subsystem": "scsi", 00:05:24.512 "config": null 00:05:24.512 }, 00:05:24.512 { 00:05:24.512 "subsystem": "scheduler", 00:05:24.512 "config": [ 00:05:24.512 { 00:05:24.512 "method": "framework_set_scheduler", 00:05:24.512 "params": { 00:05:24.512 "name": "static" 00:05:24.512 } 00:05:24.512 } 00:05:24.512 ] 00:05:24.512 }, 00:05:24.512 { 00:05:24.512 "subsystem": "vhost_scsi", 00:05:24.512 "config": [] 00:05:24.512 }, 00:05:24.512 { 00:05:24.512 "subsystem": "vhost_blk", 00:05:24.512 "config": [] 00:05:24.512 }, 00:05:24.512 { 00:05:24.512 "subsystem": "ublk", 00:05:24.512 "config": [] 00:05:24.512 }, 00:05:24.512 { 00:05:24.512 "subsystem": "nbd", 00:05:24.512 "config": [] 00:05:24.512 }, 00:05:24.512 { 00:05:24.512 "subsystem": "nvmf", 00:05:24.512 "config": [ 00:05:24.512 { 00:05:24.512 "method": "nvmf_set_config", 00:05:24.512 "params": { 00:05:24.512 "discovery_filter": "match_any", 00:05:24.512 "admin_cmd_passthru": { 00:05:24.512 "identify_ctrlr": false 00:05:24.512 } 00:05:24.512 } 00:05:24.512 }, 00:05:24.512 { 00:05:24.512 "method": "nvmf_set_max_subsystems", 00:05:24.512 "params": { 00:05:24.512 "max_subsystems": 1024 00:05:24.512 } 00:05:24.512 }, 00:05:24.512 { 00:05:24.512 "method": "nvmf_set_crdt", 00:05:24.512 "params": { 00:05:24.512 "crdt1": 0, 00:05:24.512 "crdt2": 0, 00:05:24.512 "crdt3": 0 00:05:24.512 } 00:05:24.512 }, 00:05:24.512 { 00:05:24.512 "method": "nvmf_create_transport", 00:05:24.512 "params": { 00:05:24.512 "trtype": "TCP", 00:05:24.512 "max_queue_depth": 128, 00:05:24.512 "max_io_qpairs_per_ctrlr": 127, 00:05:24.512 "in_capsule_data_size": 4096, 00:05:24.512 "max_io_size": 131072, 00:05:24.512 "io_unit_size": 131072, 00:05:24.512 "max_aq_depth": 128, 00:05:24.512 "num_shared_buffers": 511, 00:05:24.512 "buf_cache_size": 4294967295, 00:05:24.512 "dif_insert_or_strip": false, 00:05:24.512 "zcopy": false, 00:05:24.512 "c2h_success": true, 00:05:24.512 "sock_priority": 0, 00:05:24.512 "abort_timeout_sec": 1, 00:05:24.512 "ack_timeout": 0, 00:05:24.512 "data_wr_pool_size": 0 00:05:24.512 } 00:05:24.512 } 00:05:24.512 ] 00:05:24.512 }, 00:05:24.512 { 00:05:24.512 "subsystem": "iscsi", 00:05:24.512 "config": [ 00:05:24.512 { 00:05:24.512 "method": "iscsi_set_options", 00:05:24.512 "params": { 00:05:24.512 "node_base": "iqn.2016-06.io.spdk", 00:05:24.512 "max_sessions": 128, 00:05:24.512 "max_connections_per_session": 2, 00:05:24.512 "max_queue_depth": 64, 00:05:24.512 "default_time2wait": 2, 00:05:24.512 "default_time2retain": 20, 00:05:24.512 "first_burst_length": 8192, 00:05:24.512 "immediate_data": true, 00:05:24.512 "allow_duplicated_isid": false, 00:05:24.512 
"error_recovery_level": 0, 00:05:24.512 "nop_timeout": 60, 00:05:24.512 "nop_in_interval": 30, 00:05:24.512 "disable_chap": false, 00:05:24.512 "require_chap": false, 00:05:24.512 "mutual_chap": false, 00:05:24.512 "chap_group": 0, 00:05:24.512 "max_large_datain_per_connection": 64, 00:05:24.512 "max_r2t_per_connection": 4, 00:05:24.512 "pdu_pool_size": 36864, 00:05:24.512 "immediate_data_pool_size": 16384, 00:05:24.512 "data_out_pool_size": 2048 00:05:24.512 } 00:05:24.512 } 00:05:24.512 ] 00:05:24.512 } 00:05:24.512 ] 00:05:24.512 } 00:05:24.512 15:57:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:24.512 15:57:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2069095 00:05:24.512 15:57:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2069095 ']' 00:05:24.512 15:57:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2069095 00:05:24.512 15:57:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:24.512 15:57:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:24.512 15:57:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2069095 00:05:24.512 15:57:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:24.512 15:57:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:24.512 15:57:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2069095' 00:05:24.512 killing process with pid 2069095 00:05:24.512 15:57:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2069095 00:05:24.512 15:57:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2069095 00:05:24.771 15:57:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2069297 00:05:24.771 15:57:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:24.771 15:57:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:30.070 15:57:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2069297 00:05:30.070 15:57:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2069297 ']' 00:05:30.070 15:57:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2069297 00:05:30.070 15:57:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:30.070 15:57:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:30.070 15:57:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2069297 00:05:30.070 15:57:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:30.070 15:57:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:30.070 15:57:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2069297' 00:05:30.070 killing process with pid 2069297 00:05:30.070 15:57:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2069297 00:05:30.070 15:57:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2069297 
00:05:30.070 15:57:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:30.070 15:57:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:30.070 00:05:30.070 real 0m6.555s 00:05:30.070 user 0m6.454s 00:05:30.070 sys 0m0.510s 00:05:30.070 15:57:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.070 15:57:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:30.070 ************************************ 00:05:30.070 END TEST skip_rpc_with_json 00:05:30.070 ************************************ 00:05:30.070 15:57:05 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:30.070 15:57:05 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:30.070 15:57:05 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.070 15:57:05 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.070 15:57:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.070 ************************************ 00:05:30.070 START TEST skip_rpc_with_delay 00:05:30.070 ************************************ 00:05:30.070 15:57:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:30.070 15:57:05 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:30.070 15:57:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:30.070 15:57:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:30.070 15:57:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:30.070 15:57:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:30.070 15:57:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:30.070 15:57:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:30.070 15:57:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:30.070 15:57:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:30.070 15:57:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:30.070 15:57:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:30.070 15:57:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:30.070 [2024-07-15 15:57:05.815065] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:30.070 [2024-07-15 15:57:05.815173] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:30.070 15:57:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:30.070 15:57:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:30.070 15:57:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:30.070 15:57:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:30.070 00:05:30.070 real 0m0.075s 00:05:30.070 user 0m0.045s 00:05:30.070 sys 0m0.030s 00:05:30.070 15:57:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.070 15:57:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:30.070 ************************************ 00:05:30.070 END TEST skip_rpc_with_delay 00:05:30.070 ************************************ 00:05:30.070 15:57:05 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:30.070 15:57:05 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:30.070 15:57:05 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:30.070 15:57:05 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:30.070 15:57:05 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.070 15:57:05 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.070 15:57:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.332 ************************************ 00:05:30.332 START TEST exit_on_failed_rpc_init 00:05:30.332 ************************************ 00:05:30.332 15:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:30.332 15:57:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2070662 00:05:30.332 15:57:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2070662 00:05:30.332 15:57:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:30.332 15:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 2070662 ']' 00:05:30.332 15:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.332 15:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:30.332 15:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.332 15:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:30.332 15:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:30.332 [2024-07-15 15:57:05.977718] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
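The 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' message printed above comes from the harness's waitforlisten helper. A rough equivalent that assumes nothing about the real implementation beyond polling the socket with a harmless, read-only RPC:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    for _ in $(seq 1 100); do
        # rpc_get_methods is read-only; it is used here purely as a liveness probe.
        if $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.1
    done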
00:05:30.332 [2024-07-15 15:57:05.977781] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2070662 ] 00:05:30.332 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.332 [2024-07-15 15:57:06.043310] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.332 [2024-07-15 15:57:06.119259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.275 15:57:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:31.275 15:57:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:31.275 15:57:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:31.275 15:57:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:31.275 15:57:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:31.275 15:57:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:31.275 15:57:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:31.275 15:57:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:31.275 15:57:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:31.275 15:57:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:31.275 15:57:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:31.275 15:57:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:31.275 15:57:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:31.275 15:57:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:31.275 15:57:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:31.275 [2024-07-15 15:57:06.812904] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
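The second spdk_tgt launched at the end of the trace above (core mask 0x2) is expected to die during init: as the lines that follow show, it tries to claim /var/tmp/spdk.sock, the default RPC socket the first instance (mask 0x1) already owns. Outside this negative test, two targets coexist by giving each its own socket; a sketch, with the second socket path invented purely for illustration:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/bin/spdk_tgt -m 0x1 &                         # RPC on /var/tmp/spdk.sock
    $SPDK/build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &  # separate socket, no collision
    sleep 1                                                   # stand-in for waitforlisten
    $SPDK/scripts/rpc.py -s /var/tmp/spdk2.sock rpc_get_methods >/dev/null && echo 'second target up'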
00:05:31.275 [2024-07-15 15:57:06.812952] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2070765 ] 00:05:31.275 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.275 [2024-07-15 15:57:06.888437] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.275 [2024-07-15 15:57:06.954542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.275 [2024-07-15 15:57:06.954603] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:31.275 [2024-07-15 15:57:06.954612] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:31.275 [2024-07-15 15:57:06.954619] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:31.275 15:57:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:31.275 15:57:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:31.275 15:57:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:31.275 15:57:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:31.275 15:57:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:31.275 15:57:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:31.275 15:57:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:31.275 15:57:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2070662 00:05:31.275 15:57:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 2070662 ']' 00:05:31.275 15:57:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 2070662 00:05:31.275 15:57:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:31.275 15:57:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:31.275 15:57:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2070662 00:05:31.275 15:57:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:31.275 15:57:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:31.275 15:57:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2070662' 00:05:31.275 killing process with pid 2070662 00:05:31.275 15:57:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 2070662 00:05:31.275 15:57:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 2070662 00:05:31.536 00:05:31.536 real 0m1.364s 00:05:31.536 user 0m1.606s 00:05:31.536 sys 0m0.377s 00:05:31.536 15:57:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.536 15:57:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:31.536 ************************************ 00:05:31.536 END TEST exit_on_failed_rpc_init 00:05:31.536 ************************************ 00:05:31.536 15:57:07 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:31.536 15:57:07 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:31.536 00:05:31.536 real 0m13.704s 00:05:31.536 user 0m13.350s 00:05:31.536 sys 0m1.445s 00:05:31.536 15:57:07 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.536 15:57:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.536 ************************************ 00:05:31.536 END TEST skip_rpc 00:05:31.536 ************************************ 00:05:31.536 15:57:07 -- common/autotest_common.sh@1142 -- # return 0 00:05:31.536 15:57:07 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:31.536 15:57:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.536 15:57:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.536 15:57:07 -- common/autotest_common.sh@10 -- # set +x 00:05:31.797 ************************************ 00:05:31.797 START TEST rpc_client 00:05:31.797 ************************************ 00:05:31.797 15:57:07 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:31.797 * Looking for test storage... 00:05:31.797 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:31.797 15:57:07 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:31.797 OK 00:05:31.797 15:57:07 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:31.797 00:05:31.797 real 0m0.133s 00:05:31.797 user 0m0.056s 00:05:31.797 sys 0m0.085s 00:05:31.797 15:57:07 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.797 15:57:07 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:31.797 ************************************ 00:05:31.797 END TEST rpc_client 00:05:31.797 ************************************ 00:05:31.797 15:57:07 -- common/autotest_common.sh@1142 -- # return 0 00:05:31.797 15:57:07 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:31.797 15:57:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.797 15:57:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.797 15:57:07 -- common/autotest_common.sh@10 -- # set +x 00:05:31.797 ************************************ 00:05:31.797 START TEST json_config 00:05:31.797 ************************************ 00:05:31.797 15:57:07 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:32.058 15:57:07 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:32.058 15:57:07 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:32.058 15:57:07 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:32.058 15:57:07 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:32.058 15:57:07 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:32.058 15:57:07 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:32.058 15:57:07 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:32.058 15:57:07 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:32.058 15:57:07 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:32.058 
15:57:07 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:32.058 15:57:07 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:32.058 15:57:07 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:32.058 15:57:07 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:32.058 15:57:07 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:32.058 15:57:07 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:32.058 15:57:07 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:32.058 15:57:07 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:32.058 15:57:07 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:32.058 15:57:07 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:32.058 15:57:07 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:32.058 15:57:07 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:32.058 15:57:07 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:32.058 15:57:07 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.058 15:57:07 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.058 15:57:07 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.058 15:57:07 json_config -- paths/export.sh@5 -- # export PATH 00:05:32.058 15:57:07 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.058 15:57:07 json_config -- nvmf/common.sh@47 -- # : 0 00:05:32.058 15:57:07 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:32.058 15:57:07 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:32.058 15:57:07 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:32.058 15:57:07 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:32.058 15:57:07 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:32.058 15:57:07 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:32.058 15:57:07 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:32.058 15:57:07 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:32.059 15:57:07 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:32.059 15:57:07 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:32.059 15:57:07 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:32.059 15:57:07 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:32.059 15:57:07 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:32.059 15:57:07 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:32.059 15:57:07 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:32.059 15:57:07 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:32.059 15:57:07 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:32.059 15:57:07 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:32.059 15:57:07 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:32.059 15:57:07 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:32.059 15:57:07 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:32.059 15:57:07 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:32.059 15:57:07 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:32.059 15:57:07 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:32.059 INFO: JSON configuration test init 00:05:32.059 15:57:07 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:32.059 15:57:07 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:32.059 15:57:07 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:32.059 15:57:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.059 15:57:07 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:32.059 15:57:07 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:32.059 15:57:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.059 15:57:07 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:32.059 15:57:07 json_config -- json_config/common.sh@9 -- # local app=target 00:05:32.059 15:57:07 json_config -- json_config/common.sh@10 -- # shift 00:05:32.059 15:57:07 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:32.059 15:57:07 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:32.059 15:57:07 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:32.059 15:57:07 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:32.059 15:57:07 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:32.059 15:57:07 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2071155 00:05:32.059 15:57:07 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:32.059 Waiting for target to run... 00:05:32.059 15:57:07 json_config -- json_config/common.sh@25 -- # waitforlisten 2071155 /var/tmp/spdk_tgt.sock 00:05:32.059 15:57:07 json_config -- common/autotest_common.sh@829 -- # '[' -z 2071155 ']' 00:05:32.059 15:57:07 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:32.059 15:57:07 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:32.059 15:57:07 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:32.059 15:57:07 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:32.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:32.059 15:57:07 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:32.059 15:57:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.059 [2024-07-15 15:57:07.788565] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:05:32.059 [2024-07-15 15:57:07.788641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2071155 ] 00:05:32.059 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.318 [2024-07-15 15:57:08.061868] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.318 [2024-07-15 15:57:08.117034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.890 15:57:08 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:32.890 15:57:08 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:32.890 15:57:08 json_config -- json_config/common.sh@26 -- # echo '' 00:05:32.890 00:05:32.890 15:57:08 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:32.890 15:57:08 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:32.890 15:57:08 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:32.890 15:57:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.890 15:57:08 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:32.890 15:57:08 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:32.890 15:57:08 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:32.890 15:57:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.890 15:57:08 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:32.890 15:57:08 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:32.890 15:57:08 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:33.463 15:57:09 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:33.463 15:57:09 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:33.463 15:57:09 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:33.463 15:57:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.463 15:57:09 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:33.463 15:57:09 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:33.463 15:57:09 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:33.463 15:57:09 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:33.463 15:57:09 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:33.463 15:57:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:33.463 15:57:09 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:33.463 15:57:09 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:33.463 15:57:09 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:33.463 15:57:09 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:33.463 15:57:09 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:33.463 15:57:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.725 15:57:09 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:33.725 15:57:09 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:33.725 15:57:09 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:33.725 15:57:09 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:33.725 15:57:09 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:33.725 15:57:09 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:33.725 15:57:09 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:33.725 15:57:09 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:33.725 15:57:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.725 15:57:09 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:33.725 15:57:09 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:33.725 15:57:09 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:33.725 15:57:09 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:33.725 15:57:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:33.725 MallocForNvmf0 00:05:33.725 15:57:09 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:33.725 15:57:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:33.986 MallocForNvmf1 00:05:33.986 15:57:09 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:33.986 15:57:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:33.986 [2024-07-15 15:57:09.779106] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:33.986 15:57:09 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:33.986 15:57:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:34.247 15:57:09 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:34.247 15:57:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:34.507 15:57:10 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:34.507 15:57:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:34.507 15:57:10 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:34.507 15:57:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:34.768 [2024-07-15 15:57:10.409158] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:34.768 15:57:10 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:34.768 15:57:10 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:34.768 15:57:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.768 15:57:10 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:34.768 15:57:10 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:34.768 15:57:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.768 15:57:10 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:34.768 15:57:10 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:34.768 15:57:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:35.029 MallocBdevForConfigChangeCheck 00:05:35.029 15:57:10 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:35.029 15:57:10 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:35.029 15:57:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.029 15:57:10 json_config -- 
json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:35.029 15:57:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:35.290 15:57:11 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:35.290 INFO: shutting down applications... 00:05:35.290 15:57:11 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:35.290 15:57:11 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:35.290 15:57:11 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:35.290 15:57:11 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:35.861 Calling clear_iscsi_subsystem 00:05:35.861 Calling clear_nvmf_subsystem 00:05:35.861 Calling clear_nbd_subsystem 00:05:35.861 Calling clear_ublk_subsystem 00:05:35.861 Calling clear_vhost_blk_subsystem 00:05:35.861 Calling clear_vhost_scsi_subsystem 00:05:35.861 Calling clear_bdev_subsystem 00:05:35.861 15:57:11 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:35.861 15:57:11 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:35.861 15:57:11 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:35.861 15:57:11 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:35.862 15:57:11 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:35.862 15:57:11 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:36.122 15:57:11 json_config -- json_config/json_config.sh@345 -- # break 00:05:36.122 15:57:11 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:36.122 15:57:11 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:36.122 15:57:11 json_config -- json_config/common.sh@31 -- # local app=target 00:05:36.122 15:57:11 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:36.122 15:57:11 json_config -- json_config/common.sh@35 -- # [[ -n 2071155 ]] 00:05:36.122 15:57:11 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2071155 00:05:36.122 15:57:11 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:36.122 15:57:11 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:36.122 15:57:11 json_config -- json_config/common.sh@41 -- # kill -0 2071155 00:05:36.122 15:57:11 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:36.694 15:57:12 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:36.694 15:57:12 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:36.694 15:57:12 json_config -- json_config/common.sh@41 -- # kill -0 2071155 00:05:36.694 15:57:12 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:36.694 15:57:12 json_config -- json_config/common.sh@43 -- # break 00:05:36.694 15:57:12 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:36.694 15:57:12 json_config -- json_config/common.sh@53 -- # echo 'SPDK target 
shutdown done' 00:05:36.694 SPDK target shutdown done 00:05:36.694 15:57:12 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:36.694 INFO: relaunching applications... 00:05:36.694 15:57:12 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:36.694 15:57:12 json_config -- json_config/common.sh@9 -- # local app=target 00:05:36.694 15:57:12 json_config -- json_config/common.sh@10 -- # shift 00:05:36.694 15:57:12 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:36.694 15:57:12 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:36.694 15:57:12 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:36.694 15:57:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:36.694 15:57:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:36.694 15:57:12 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2072550 00:05:36.694 15:57:12 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:36.694 Waiting for target to run... 00:05:36.694 15:57:12 json_config -- json_config/common.sh@25 -- # waitforlisten 2072550 /var/tmp/spdk_tgt.sock 00:05:36.694 15:57:12 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:36.694 15:57:12 json_config -- common/autotest_common.sh@829 -- # '[' -z 2072550 ']' 00:05:36.694 15:57:12 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:36.694 15:57:12 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.694 15:57:12 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:36.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:36.694 15:57:12 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.694 15:57:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.694 [2024-07-15 15:57:12.399760] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
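Before the shutdown and relaunch traced above, the json_config target was populated one RPC at a time; in the trace each call hides behind the tgt_rpc wrapper, which simply pins rpc.py to /var/tmp/spdk_tgt.sock (its expansion is visible at json_config/common.sh@57). Collected in one place, with the same arguments the test used:

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
    $RPC bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck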
00:05:36.694 [2024-07-15 15:57:12.399829] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2072550 ] 00:05:36.694 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.955 [2024-07-15 15:57:12.671168] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.955 [2024-07-15 15:57:12.724245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.527 [2024-07-15 15:57:13.217135] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:37.527 [2024-07-15 15:57:13.249499] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:37.527 15:57:13 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.527 15:57:13 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:37.527 15:57:13 json_config -- json_config/common.sh@26 -- # echo '' 00:05:37.527 00:05:37.527 15:57:13 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:37.527 15:57:13 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:37.527 INFO: Checking if target configuration is the same... 00:05:37.527 15:57:13 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:37.527 15:57:13 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:37.527 15:57:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:37.527 + '[' 2 -ne 2 ']' 00:05:37.527 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:37.527 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:37.527 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:37.527 +++ basename /dev/fd/62 00:05:37.527 ++ mktemp /tmp/62.XXX 00:05:37.527 + tmp_file_1=/tmp/62.if6 00:05:37.527 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:37.527 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:37.527 + tmp_file_2=/tmp/spdk_tgt_config.json.9LJ 00:05:37.527 + ret=0 00:05:37.527 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:37.786 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:38.046 + diff -u /tmp/62.if6 /tmp/spdk_tgt_config.json.9LJ 00:05:38.046 + echo 'INFO: JSON config files are the same' 00:05:38.046 INFO: JSON config files are the same 00:05:38.046 + rm /tmp/62.if6 /tmp/spdk_tgt_config.json.9LJ 00:05:38.046 + exit 0 00:05:38.046 15:57:13 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:38.046 15:57:13 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:38.046 INFO: changing configuration and checking if this can be detected... 
00:05:38.046 15:57:13 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:38.046 15:57:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:38.046 15:57:13 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:38.046 15:57:13 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:38.046 15:57:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:38.046 + '[' 2 -ne 2 ']' 00:05:38.046 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:38.046 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:38.046 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:38.046 +++ basename /dev/fd/62 00:05:38.046 ++ mktemp /tmp/62.XXX 00:05:38.046 + tmp_file_1=/tmp/62.knS 00:05:38.046 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:38.046 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:38.046 + tmp_file_2=/tmp/spdk_tgt_config.json.T2t 00:05:38.046 + ret=0 00:05:38.046 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:38.307 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:38.567 + diff -u /tmp/62.knS /tmp/spdk_tgt_config.json.T2t 00:05:38.567 + ret=1 00:05:38.567 + echo '=== Start of file: /tmp/62.knS ===' 00:05:38.567 + cat /tmp/62.knS 00:05:38.567 + echo '=== End of file: /tmp/62.knS ===' 00:05:38.567 + echo '' 00:05:38.567 + echo '=== Start of file: /tmp/spdk_tgt_config.json.T2t ===' 00:05:38.567 + cat /tmp/spdk_tgt_config.json.T2t 00:05:38.567 + echo '=== End of file: /tmp/spdk_tgt_config.json.T2t ===' 00:05:38.567 + echo '' 00:05:38.567 + rm /tmp/62.knS /tmp/spdk_tgt_config.json.T2t 00:05:38.567 + exit 1 00:05:38.567 15:57:14 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:38.567 INFO: configuration change detected. 
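Both comparisons above follow the same recipe, visible in the '+' trace from json_diff.sh: normalize a fresh save_config dump and the saved spdk_tgt_config.json with config_filter.py, then diff them. An empty diff yields 'JSON config files are the same'; after bdev_malloc_delete removes MallocBdevForConfigChangeCheck, the diff is non-empty and 'configuration change detected.' is reported. A condensed sketch, assuming config_filter.py reads the configuration on stdin as the trace's pipelines suggest (the /tmp file names are illustrative):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | $SPDK/test/json_config/config_filter.py -method sort > /tmp/live.json
    $SPDK/test/json_config/config_filter.py -method sort \
        < $SPDK/spdk_tgt_config.json > /tmp/saved.json
    if diff -u /tmp/live.json /tmp/saved.json; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi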
00:05:38.567 15:57:14 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:38.567 15:57:14 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:38.567 15:57:14 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:38.567 15:57:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.567 15:57:14 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:38.567 15:57:14 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:38.567 15:57:14 json_config -- json_config/json_config.sh@317 -- # [[ -n 2072550 ]] 00:05:38.567 15:57:14 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:38.567 15:57:14 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:38.567 15:57:14 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:38.567 15:57:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.567 15:57:14 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:38.567 15:57:14 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:38.567 15:57:14 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:38.567 15:57:14 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:38.567 15:57:14 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:38.567 15:57:14 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:38.568 15:57:14 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:38.568 15:57:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.568 15:57:14 json_config -- json_config/json_config.sh@323 -- # killprocess 2072550 00:05:38.568 15:57:14 json_config -- common/autotest_common.sh@948 -- # '[' -z 2072550 ']' 00:05:38.568 15:57:14 json_config -- common/autotest_common.sh@952 -- # kill -0 2072550 00:05:38.568 15:57:14 json_config -- common/autotest_common.sh@953 -- # uname 00:05:38.568 15:57:14 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:38.568 15:57:14 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2072550 00:05:38.568 15:57:14 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:38.568 15:57:14 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:38.568 15:57:14 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2072550' 00:05:38.568 killing process with pid 2072550 00:05:38.568 15:57:14 json_config -- common/autotest_common.sh@967 -- # kill 2072550 00:05:38.568 15:57:14 json_config -- common/autotest_common.sh@972 -- # wait 2072550 00:05:38.828 15:57:14 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:38.828 15:57:14 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:38.828 15:57:14 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:38.828 15:57:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.828 15:57:14 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:38.828 15:57:14 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:38.828 INFO: Success 00:05:38.828 00:05:38.828 real 0m6.994s 
00:05:38.828 user 0m8.564s 00:05:38.828 sys 0m1.681s 00:05:38.828 15:57:14 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.828 15:57:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.828 ************************************ 00:05:38.828 END TEST json_config 00:05:38.828 ************************************ 00:05:38.828 15:57:14 -- common/autotest_common.sh@1142 -- # return 0 00:05:38.828 15:57:14 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:38.828 15:57:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.828 15:57:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.828 15:57:14 -- common/autotest_common.sh@10 -- # set +x 00:05:39.090 ************************************ 00:05:39.090 START TEST json_config_extra_key 00:05:39.090 ************************************ 00:05:39.090 15:57:14 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:39.090 15:57:14 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:39.090 15:57:14 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:39.090 15:57:14 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:39.090 15:57:14 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:39.090 15:57:14 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:39.090 15:57:14 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:39.090 15:57:14 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:39.090 15:57:14 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:39.090 15:57:14 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:39.090 15:57:14 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:39.090 15:57:14 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:39.091 15:57:14 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:39.091 15:57:14 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:39.091 15:57:14 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:39.091 15:57:14 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:39.091 15:57:14 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:39.091 15:57:14 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:39.091 15:57:14 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:39.091 15:57:14 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:39.091 15:57:14 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:39.091 15:57:14 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:39.091 15:57:14 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:39.091 15:57:14 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.091 15:57:14 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.091 15:57:14 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.091 15:57:14 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:39.091 15:57:14 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.091 15:57:14 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:39.091 15:57:14 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:39.091 15:57:14 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:39.091 15:57:14 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:39.091 15:57:14 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:39.091 15:57:14 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:39.091 15:57:14 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:39.091 15:57:14 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:39.091 15:57:14 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:39.091 15:57:14 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:39.091 15:57:14 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:39.091 15:57:14 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:39.091 15:57:14 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:39.091 15:57:14 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:39.091 15:57:14 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:39.091 15:57:14 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:39.091 15:57:14 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:39.091 15:57:14 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:39.091 15:57:14 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:39.091 15:57:14 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:39.091 INFO: launching applications... 00:05:39.091 15:57:14 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:39.091 15:57:14 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:39.091 15:57:14 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:39.091 15:57:14 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:39.091 15:57:14 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:39.091 15:57:14 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:39.091 15:57:14 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:39.091 15:57:14 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:39.091 15:57:14 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2073259 00:05:39.091 15:57:14 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:39.091 Waiting for target to run... 00:05:39.091 15:57:14 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2073259 /var/tmp/spdk_tgt.sock 00:05:39.091 15:57:14 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 2073259 ']' 00:05:39.091 15:57:14 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:39.091 15:57:14 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:39.091 15:57:14 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:39.091 15:57:14 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:39.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:39.091 15:57:14 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:39.091 15:57:14 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:39.091 [2024-07-15 15:57:14.847912] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
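json_config_extra_key only has to prove that the target comes up cleanly when handed a configuration file through --json (here test/json_config/extra_key.json, whose contents this log never prints). Purely as an illustration of the general shape such a file takes, and not the actual extra_key.json, a minimal config with a single malloc bdev could look like this, saved for example as /tmp/minimal_config.json:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_malloc_create",
              "params": { "name": "Malloc0", "num_blocks": 131072, "block_size": 512 }
            }
          ]
        }
      ]
    }

and be exercised with:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --json /tmp/minimal_config.json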
00:05:39.091 [2024-07-15 15:57:14.847983] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2073259 ] 00:05:39.091 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.395 [2024-07-15 15:57:15.212630] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.675 [2024-07-15 15:57:15.264753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.936 15:57:15 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:39.936 15:57:15 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:39.936 15:57:15 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:39.936 00:05:39.936 15:57:15 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:39.936 INFO: shutting down applications... 00:05:39.936 15:57:15 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:39.936 15:57:15 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:39.936 15:57:15 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:39.936 15:57:15 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2073259 ]] 00:05:39.936 15:57:15 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2073259 00:05:39.936 15:57:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:39.936 15:57:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:39.936 15:57:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2073259 00:05:39.936 15:57:15 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:40.508 15:57:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:40.509 15:57:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:40.509 15:57:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2073259 00:05:40.509 15:57:16 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:40.509 15:57:16 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:40.509 15:57:16 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:40.509 15:57:16 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:40.509 SPDK target shutdown done 00:05:40.509 15:57:16 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:40.509 Success 00:05:40.509 00:05:40.509 real 0m1.447s 00:05:40.509 user 0m1.011s 00:05:40.509 sys 0m0.471s 00:05:40.509 15:57:16 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.509 15:57:16 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:40.509 ************************************ 00:05:40.509 END TEST json_config_extra_key 00:05:40.509 ************************************ 00:05:40.509 15:57:16 -- common/autotest_common.sh@1142 -- # return 0 00:05:40.509 15:57:16 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:40.509 15:57:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:40.509 15:57:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.509 15:57:16 -- 
common/autotest_common.sh@10 -- # set +x 00:05:40.509 ************************************ 00:05:40.509 START TEST alias_rpc 00:05:40.509 ************************************ 00:05:40.509 15:57:16 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:40.509 * Looking for test storage... 00:05:40.509 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:40.509 15:57:16 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:40.509 15:57:16 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2073646 00:05:40.509 15:57:16 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2073646 00:05:40.509 15:57:16 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:40.509 15:57:16 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 2073646 ']' 00:05:40.509 15:57:16 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.509 15:57:16 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.509 15:57:16 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.509 15:57:16 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.509 15:57:16 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.769 [2024-07-15 15:57:16.371874] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:05:40.769 [2024-07-15 15:57:16.371946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2073646 ] 00:05:40.769 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.769 [2024-07-15 15:57:16.435736] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.769 [2024-07-15 15:57:16.509977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.340 15:57:17 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.340 15:57:17 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:41.340 15:57:17 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:41.600 15:57:17 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2073646 00:05:41.600 15:57:17 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 2073646 ']' 00:05:41.600 15:57:17 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 2073646 00:05:41.600 15:57:17 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:41.600 15:57:17 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:41.600 15:57:17 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2073646 00:05:41.600 15:57:17 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:41.600 15:57:17 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:41.600 15:57:17 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2073646' 00:05:41.600 killing process with pid 2073646 00:05:41.600 15:57:17 alias_rpc -- 
common/autotest_common.sh@967 -- # kill 2073646 00:05:41.600 15:57:17 alias_rpc -- common/autotest_common.sh@972 -- # wait 2073646 00:05:41.860 00:05:41.860 real 0m1.365s 00:05:41.860 user 0m1.508s 00:05:41.860 sys 0m0.359s 00:05:41.860 15:57:17 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.860 15:57:17 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.860 ************************************ 00:05:41.860 END TEST alias_rpc 00:05:41.860 ************************************ 00:05:41.860 15:57:17 -- common/autotest_common.sh@1142 -- # return 0 00:05:41.860 15:57:17 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:41.860 15:57:17 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:41.860 15:57:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:41.860 15:57:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.860 15:57:17 -- common/autotest_common.sh@10 -- # set +x 00:05:41.860 ************************************ 00:05:41.860 START TEST spdkcli_tcp 00:05:41.860 ************************************ 00:05:41.860 15:57:17 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:42.120 * Looking for test storage... 00:05:42.120 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:42.120 15:57:17 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:42.120 15:57:17 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:42.120 15:57:17 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:42.120 15:57:17 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:42.120 15:57:17 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:42.120 15:57:17 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:42.120 15:57:17 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:42.120 15:57:17 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:42.120 15:57:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:42.120 15:57:17 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2073957 00:05:42.120 15:57:17 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2073957 00:05:42.120 15:57:17 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:42.120 15:57:17 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 2073957 ']' 00:05:42.120 15:57:17 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.120 15:57:17 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:42.120 15:57:17 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.120 15:57:17 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:42.120 15:57:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:42.120 [2024-07-15 15:57:17.818669] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
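The spdkcli_tcp run starting here talks to the target over TCP instead of the UNIX socket: socat bridges 127.0.0.1:9998 to /var/tmp/spdk.sock and rpc.py is pointed at the TCP side. Stripped to its two moving parts, with the addresses and retry options used in this run:

  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  # -r retries the connection while the bridge comes up; -t caps the wait for a reply
  scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

The long method list that follows in the trace is simply the reply to that rpc_get_methods call.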
00:05:42.120 [2024-07-15 15:57:17.818744] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2073957 ] 00:05:42.120 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.120 [2024-07-15 15:57:17.885802] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:42.380 [2024-07-15 15:57:17.961765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.380 [2024-07-15 15:57:17.961767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.950 15:57:18 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.950 15:57:18 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:42.950 15:57:18 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2074045 00:05:42.950 15:57:18 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:42.950 15:57:18 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:42.950 [ 00:05:42.950 "bdev_malloc_delete", 00:05:42.950 "bdev_malloc_create", 00:05:42.950 "bdev_null_resize", 00:05:42.950 "bdev_null_delete", 00:05:42.950 "bdev_null_create", 00:05:42.950 "bdev_nvme_cuse_unregister", 00:05:42.950 "bdev_nvme_cuse_register", 00:05:42.950 "bdev_opal_new_user", 00:05:42.950 "bdev_opal_set_lock_state", 00:05:42.950 "bdev_opal_delete", 00:05:42.950 "bdev_opal_get_info", 00:05:42.950 "bdev_opal_create", 00:05:42.950 "bdev_nvme_opal_revert", 00:05:42.950 "bdev_nvme_opal_init", 00:05:42.950 "bdev_nvme_send_cmd", 00:05:42.950 "bdev_nvme_get_path_iostat", 00:05:42.950 "bdev_nvme_get_mdns_discovery_info", 00:05:42.950 "bdev_nvme_stop_mdns_discovery", 00:05:42.950 "bdev_nvme_start_mdns_discovery", 00:05:42.950 "bdev_nvme_set_multipath_policy", 00:05:42.950 "bdev_nvme_set_preferred_path", 00:05:42.950 "bdev_nvme_get_io_paths", 00:05:42.950 "bdev_nvme_remove_error_injection", 00:05:42.950 "bdev_nvme_add_error_injection", 00:05:42.950 "bdev_nvme_get_discovery_info", 00:05:42.950 "bdev_nvme_stop_discovery", 00:05:42.950 "bdev_nvme_start_discovery", 00:05:42.950 "bdev_nvme_get_controller_health_info", 00:05:42.950 "bdev_nvme_disable_controller", 00:05:42.950 "bdev_nvme_enable_controller", 00:05:42.950 "bdev_nvme_reset_controller", 00:05:42.950 "bdev_nvme_get_transport_statistics", 00:05:42.950 "bdev_nvme_apply_firmware", 00:05:42.950 "bdev_nvme_detach_controller", 00:05:42.950 "bdev_nvme_get_controllers", 00:05:42.950 "bdev_nvme_attach_controller", 00:05:42.950 "bdev_nvme_set_hotplug", 00:05:42.950 "bdev_nvme_set_options", 00:05:42.950 "bdev_passthru_delete", 00:05:42.950 "bdev_passthru_create", 00:05:42.950 "bdev_lvol_set_parent_bdev", 00:05:42.950 "bdev_lvol_set_parent", 00:05:42.950 "bdev_lvol_check_shallow_copy", 00:05:42.950 "bdev_lvol_start_shallow_copy", 00:05:42.950 "bdev_lvol_grow_lvstore", 00:05:42.950 "bdev_lvol_get_lvols", 00:05:42.950 "bdev_lvol_get_lvstores", 00:05:42.950 "bdev_lvol_delete", 00:05:42.950 "bdev_lvol_set_read_only", 00:05:42.950 "bdev_lvol_resize", 00:05:42.950 "bdev_lvol_decouple_parent", 00:05:42.950 "bdev_lvol_inflate", 00:05:42.950 "bdev_lvol_rename", 00:05:42.950 "bdev_lvol_clone_bdev", 00:05:42.950 "bdev_lvol_clone", 00:05:42.951 "bdev_lvol_snapshot", 00:05:42.951 "bdev_lvol_create", 00:05:42.951 "bdev_lvol_delete_lvstore", 00:05:42.951 
"bdev_lvol_rename_lvstore", 00:05:42.951 "bdev_lvol_create_lvstore", 00:05:42.951 "bdev_raid_set_options", 00:05:42.951 "bdev_raid_remove_base_bdev", 00:05:42.951 "bdev_raid_add_base_bdev", 00:05:42.951 "bdev_raid_delete", 00:05:42.951 "bdev_raid_create", 00:05:42.951 "bdev_raid_get_bdevs", 00:05:42.951 "bdev_error_inject_error", 00:05:42.951 "bdev_error_delete", 00:05:42.951 "bdev_error_create", 00:05:42.951 "bdev_split_delete", 00:05:42.951 "bdev_split_create", 00:05:42.951 "bdev_delay_delete", 00:05:42.951 "bdev_delay_create", 00:05:42.951 "bdev_delay_update_latency", 00:05:42.951 "bdev_zone_block_delete", 00:05:42.951 "bdev_zone_block_create", 00:05:42.951 "blobfs_create", 00:05:42.951 "blobfs_detect", 00:05:42.951 "blobfs_set_cache_size", 00:05:42.951 "bdev_aio_delete", 00:05:42.951 "bdev_aio_rescan", 00:05:42.951 "bdev_aio_create", 00:05:42.951 "bdev_ftl_set_property", 00:05:42.951 "bdev_ftl_get_properties", 00:05:42.951 "bdev_ftl_get_stats", 00:05:42.951 "bdev_ftl_unmap", 00:05:42.951 "bdev_ftl_unload", 00:05:42.951 "bdev_ftl_delete", 00:05:42.951 "bdev_ftl_load", 00:05:42.951 "bdev_ftl_create", 00:05:42.951 "bdev_virtio_attach_controller", 00:05:42.951 "bdev_virtio_scsi_get_devices", 00:05:42.951 "bdev_virtio_detach_controller", 00:05:42.951 "bdev_virtio_blk_set_hotplug", 00:05:42.951 "bdev_iscsi_delete", 00:05:42.951 "bdev_iscsi_create", 00:05:42.951 "bdev_iscsi_set_options", 00:05:42.951 "accel_error_inject_error", 00:05:42.951 "ioat_scan_accel_module", 00:05:42.951 "dsa_scan_accel_module", 00:05:42.951 "iaa_scan_accel_module", 00:05:42.951 "vfu_virtio_create_scsi_endpoint", 00:05:42.951 "vfu_virtio_scsi_remove_target", 00:05:42.951 "vfu_virtio_scsi_add_target", 00:05:42.951 "vfu_virtio_create_blk_endpoint", 00:05:42.951 "vfu_virtio_delete_endpoint", 00:05:42.951 "keyring_file_remove_key", 00:05:42.951 "keyring_file_add_key", 00:05:42.951 "keyring_linux_set_options", 00:05:42.951 "iscsi_get_histogram", 00:05:42.951 "iscsi_enable_histogram", 00:05:42.951 "iscsi_set_options", 00:05:42.951 "iscsi_get_auth_groups", 00:05:42.951 "iscsi_auth_group_remove_secret", 00:05:42.951 "iscsi_auth_group_add_secret", 00:05:42.951 "iscsi_delete_auth_group", 00:05:42.951 "iscsi_create_auth_group", 00:05:42.951 "iscsi_set_discovery_auth", 00:05:42.951 "iscsi_get_options", 00:05:42.951 "iscsi_target_node_request_logout", 00:05:42.951 "iscsi_target_node_set_redirect", 00:05:42.951 "iscsi_target_node_set_auth", 00:05:42.951 "iscsi_target_node_add_lun", 00:05:42.951 "iscsi_get_stats", 00:05:42.951 "iscsi_get_connections", 00:05:42.951 "iscsi_portal_group_set_auth", 00:05:42.951 "iscsi_start_portal_group", 00:05:42.951 "iscsi_delete_portal_group", 00:05:42.951 "iscsi_create_portal_group", 00:05:42.951 "iscsi_get_portal_groups", 00:05:42.951 "iscsi_delete_target_node", 00:05:42.951 "iscsi_target_node_remove_pg_ig_maps", 00:05:42.951 "iscsi_target_node_add_pg_ig_maps", 00:05:42.951 "iscsi_create_target_node", 00:05:42.951 "iscsi_get_target_nodes", 00:05:42.951 "iscsi_delete_initiator_group", 00:05:42.951 "iscsi_initiator_group_remove_initiators", 00:05:42.951 "iscsi_initiator_group_add_initiators", 00:05:42.951 "iscsi_create_initiator_group", 00:05:42.951 "iscsi_get_initiator_groups", 00:05:42.951 "nvmf_set_crdt", 00:05:42.951 "nvmf_set_config", 00:05:42.951 "nvmf_set_max_subsystems", 00:05:42.951 "nvmf_stop_mdns_prr", 00:05:42.951 "nvmf_publish_mdns_prr", 00:05:42.951 "nvmf_subsystem_get_listeners", 00:05:42.951 "nvmf_subsystem_get_qpairs", 00:05:42.951 "nvmf_subsystem_get_controllers", 00:05:42.951 
"nvmf_get_stats", 00:05:42.951 "nvmf_get_transports", 00:05:42.951 "nvmf_create_transport", 00:05:42.951 "nvmf_get_targets", 00:05:42.951 "nvmf_delete_target", 00:05:42.951 "nvmf_create_target", 00:05:42.951 "nvmf_subsystem_allow_any_host", 00:05:42.951 "nvmf_subsystem_remove_host", 00:05:42.951 "nvmf_subsystem_add_host", 00:05:42.951 "nvmf_ns_remove_host", 00:05:42.951 "nvmf_ns_add_host", 00:05:42.951 "nvmf_subsystem_remove_ns", 00:05:42.951 "nvmf_subsystem_add_ns", 00:05:42.951 "nvmf_subsystem_listener_set_ana_state", 00:05:42.951 "nvmf_discovery_get_referrals", 00:05:42.951 "nvmf_discovery_remove_referral", 00:05:42.951 "nvmf_discovery_add_referral", 00:05:42.951 "nvmf_subsystem_remove_listener", 00:05:42.951 "nvmf_subsystem_add_listener", 00:05:42.951 "nvmf_delete_subsystem", 00:05:42.951 "nvmf_create_subsystem", 00:05:42.951 "nvmf_get_subsystems", 00:05:42.951 "env_dpdk_get_mem_stats", 00:05:42.951 "nbd_get_disks", 00:05:42.951 "nbd_stop_disk", 00:05:42.951 "nbd_start_disk", 00:05:42.951 "ublk_recover_disk", 00:05:42.951 "ublk_get_disks", 00:05:42.951 "ublk_stop_disk", 00:05:42.951 "ublk_start_disk", 00:05:42.951 "ublk_destroy_target", 00:05:42.951 "ublk_create_target", 00:05:42.951 "virtio_blk_create_transport", 00:05:42.951 "virtio_blk_get_transports", 00:05:42.951 "vhost_controller_set_coalescing", 00:05:42.951 "vhost_get_controllers", 00:05:42.951 "vhost_delete_controller", 00:05:42.951 "vhost_create_blk_controller", 00:05:42.951 "vhost_scsi_controller_remove_target", 00:05:42.951 "vhost_scsi_controller_add_target", 00:05:42.951 "vhost_start_scsi_controller", 00:05:42.951 "vhost_create_scsi_controller", 00:05:42.951 "thread_set_cpumask", 00:05:42.951 "framework_get_governor", 00:05:42.951 "framework_get_scheduler", 00:05:42.951 "framework_set_scheduler", 00:05:42.951 "framework_get_reactors", 00:05:42.951 "thread_get_io_channels", 00:05:42.951 "thread_get_pollers", 00:05:42.951 "thread_get_stats", 00:05:42.951 "framework_monitor_context_switch", 00:05:42.951 "spdk_kill_instance", 00:05:42.951 "log_enable_timestamps", 00:05:42.951 "log_get_flags", 00:05:42.951 "log_clear_flag", 00:05:42.951 "log_set_flag", 00:05:42.951 "log_get_level", 00:05:42.951 "log_set_level", 00:05:42.951 "log_get_print_level", 00:05:42.951 "log_set_print_level", 00:05:42.951 "framework_enable_cpumask_locks", 00:05:42.951 "framework_disable_cpumask_locks", 00:05:42.951 "framework_wait_init", 00:05:42.951 "framework_start_init", 00:05:42.951 "scsi_get_devices", 00:05:42.951 "bdev_get_histogram", 00:05:42.951 "bdev_enable_histogram", 00:05:42.951 "bdev_set_qos_limit", 00:05:42.951 "bdev_set_qd_sampling_period", 00:05:42.951 "bdev_get_bdevs", 00:05:42.951 "bdev_reset_iostat", 00:05:42.951 "bdev_get_iostat", 00:05:42.951 "bdev_examine", 00:05:42.951 "bdev_wait_for_examine", 00:05:42.951 "bdev_set_options", 00:05:42.951 "notify_get_notifications", 00:05:42.951 "notify_get_types", 00:05:42.951 "accel_get_stats", 00:05:42.951 "accel_set_options", 00:05:42.951 "accel_set_driver", 00:05:42.951 "accel_crypto_key_destroy", 00:05:42.951 "accel_crypto_keys_get", 00:05:42.951 "accel_crypto_key_create", 00:05:42.951 "accel_assign_opc", 00:05:42.951 "accel_get_module_info", 00:05:42.951 "accel_get_opc_assignments", 00:05:42.951 "vmd_rescan", 00:05:42.951 "vmd_remove_device", 00:05:42.951 "vmd_enable", 00:05:42.951 "sock_get_default_impl", 00:05:42.951 "sock_set_default_impl", 00:05:42.951 "sock_impl_set_options", 00:05:42.951 "sock_impl_get_options", 00:05:42.951 "iobuf_get_stats", 00:05:42.951 "iobuf_set_options", 
00:05:42.951 "keyring_get_keys", 00:05:42.951 "framework_get_pci_devices", 00:05:42.951 "framework_get_config", 00:05:42.951 "framework_get_subsystems", 00:05:42.951 "vfu_tgt_set_base_path", 00:05:42.951 "trace_get_info", 00:05:42.951 "trace_get_tpoint_group_mask", 00:05:42.951 "trace_disable_tpoint_group", 00:05:42.951 "trace_enable_tpoint_group", 00:05:42.951 "trace_clear_tpoint_mask", 00:05:42.951 "trace_set_tpoint_mask", 00:05:42.951 "spdk_get_version", 00:05:42.951 "rpc_get_methods" 00:05:42.951 ] 00:05:42.951 15:57:18 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:42.951 15:57:18 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:42.951 15:57:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:43.212 15:57:18 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:43.212 15:57:18 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2073957 00:05:43.212 15:57:18 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 2073957 ']' 00:05:43.212 15:57:18 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 2073957 00:05:43.212 15:57:18 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:43.212 15:57:18 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:43.212 15:57:18 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2073957 00:05:43.212 15:57:18 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:43.212 15:57:18 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:43.212 15:57:18 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2073957' 00:05:43.212 killing process with pid 2073957 00:05:43.212 15:57:18 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 2073957 00:05:43.212 15:57:18 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 2073957 00:05:43.472 00:05:43.472 real 0m1.408s 00:05:43.472 user 0m2.576s 00:05:43.472 sys 0m0.432s 00:05:43.472 15:57:19 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.472 15:57:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:43.472 ************************************ 00:05:43.472 END TEST spdkcli_tcp 00:05:43.472 ************************************ 00:05:43.472 15:57:19 -- common/autotest_common.sh@1142 -- # return 0 00:05:43.472 15:57:19 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:43.472 15:57:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.472 15:57:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.472 15:57:19 -- common/autotest_common.sh@10 -- # set +x 00:05:43.472 ************************************ 00:05:43.472 START TEST dpdk_mem_utility 00:05:43.472 ************************************ 00:05:43.472 15:57:19 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:43.472 * Looking for test storage... 
00:05:43.472 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:43.472 15:57:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:43.472 15:57:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2074239 00:05:43.472 15:57:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2074239 00:05:43.472 15:57:19 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 2074239 ']' 00:05:43.472 15:57:19 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.472 15:57:19 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.472 15:57:19 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.472 15:57:19 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.472 15:57:19 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:43.472 15:57:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.472 [2024-07-15 15:57:19.268633] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:05:43.472 [2024-07-15 15:57:19.268690] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2074239 ] 00:05:43.472 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.732 [2024-07-15 15:57:19.329678] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.732 [2024-07-15 15:57:19.400640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.302 15:57:20 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.302 15:57:20 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:44.302 15:57:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:44.302 15:57:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:44.302 15:57:20 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.302 15:57:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:44.302 { 00:05:44.302 "filename": "/tmp/spdk_mem_dump.txt" 00:05:44.302 } 00:05:44.302 15:57:20 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.302 15:57:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:44.302 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:44.302 1 heaps totaling size 814.000000 MiB 00:05:44.302 size: 814.000000 MiB heap id: 0 00:05:44.302 end heaps---------- 00:05:44.302 8 mempools totaling size 598.116089 MiB 00:05:44.302 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:44.302 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:44.302 size: 84.521057 MiB name: bdev_io_2074239 00:05:44.302 size: 51.011292 MiB name: evtpool_2074239 00:05:44.302 
size: 50.003479 MiB name: msgpool_2074239 00:05:44.302 size: 21.763794 MiB name: PDU_Pool 00:05:44.302 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:44.302 size: 0.026123 MiB name: Session_Pool 00:05:44.302 end mempools------- 00:05:44.302 6 memzones totaling size 4.142822 MiB 00:05:44.302 size: 1.000366 MiB name: RG_ring_0_2074239 00:05:44.302 size: 1.000366 MiB name: RG_ring_1_2074239 00:05:44.302 size: 1.000366 MiB name: RG_ring_4_2074239 00:05:44.302 size: 1.000366 MiB name: RG_ring_5_2074239 00:05:44.302 size: 0.125366 MiB name: RG_ring_2_2074239 00:05:44.302 size: 0.015991 MiB name: RG_ring_3_2074239 00:05:44.302 end memzones------- 00:05:44.302 15:57:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:44.302 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:44.302 list of free elements. size: 12.519348 MiB 00:05:44.302 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:44.302 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:44.302 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:44.302 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:44.302 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:44.303 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:44.303 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:44.303 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:44.303 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:44.303 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:44.303 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:44.303 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:44.303 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:44.303 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:44.303 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:44.303 list of standard malloc elements. 
size: 199.218079 MiB 00:05:44.303 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:44.303 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:44.303 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:44.303 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:44.303 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:44.303 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:44.303 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:44.303 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:44.303 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:44.303 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:44.303 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:44.303 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:44.303 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:44.303 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:44.303 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:44.303 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:44.303 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:44.303 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:44.303 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:44.303 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:44.303 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:44.303 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:44.303 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:44.303 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:44.303 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:44.303 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:44.303 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:44.303 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:44.303 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:44.303 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:44.303 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:44.303 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:44.303 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:44.303 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:44.303 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:44.303 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:44.303 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:44.303 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:44.303 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:44.303 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:44.303 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:44.303 list of memzone associated elements. 
size: 602.262573 MiB 00:05:44.303 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:44.303 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:44.303 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:44.303 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:44.303 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:44.303 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2074239_0 00:05:44.303 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:44.303 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2074239_0 00:05:44.303 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:44.303 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2074239_0 00:05:44.303 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:44.303 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:44.303 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:44.303 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:44.303 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:44.303 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2074239 00:05:44.303 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:44.303 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2074239 00:05:44.303 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:44.303 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2074239 00:05:44.303 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:44.303 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:44.303 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:44.303 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:44.303 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:44.303 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:44.303 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:44.303 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:44.303 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:44.303 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2074239 00:05:44.303 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:44.303 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2074239 00:05:44.303 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:44.303 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2074239 00:05:44.303 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:44.303 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2074239 00:05:44.303 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:44.303 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2074239 00:05:44.303 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:44.303 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:44.303 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:44.303 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:44.303 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:44.303 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:44.303 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:44.303 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_2074239 00:05:44.303 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:44.303 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:44.303 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:44.303 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:44.303 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:44.303 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2074239 00:05:44.303 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:44.303 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:44.303 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:44.303 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2074239 00:05:44.303 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:44.303 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2074239 00:05:44.303 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:44.303 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:44.303 15:57:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:44.303 15:57:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2074239 00:05:44.303 15:57:20 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 2074239 ']' 00:05:44.303 15:57:20 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 2074239 00:05:44.303 15:57:20 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:44.303 15:57:20 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:44.303 15:57:20 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2074239 00:05:44.564 15:57:20 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:44.564 15:57:20 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:44.564 15:57:20 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2074239' 00:05:44.564 killing process with pid 2074239 00:05:44.564 15:57:20 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 2074239 00:05:44.564 15:57:20 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 2074239 00:05:44.564 00:05:44.564 real 0m1.229s 00:05:44.564 user 0m1.295s 00:05:44.564 sys 0m0.339s 00:05:44.564 15:57:20 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.564 15:57:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:44.564 ************************************ 00:05:44.564 END TEST dpdk_mem_utility 00:05:44.564 ************************************ 00:05:44.564 15:57:20 -- common/autotest_common.sh@1142 -- # return 0 00:05:44.564 15:57:20 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:44.564 15:57:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:44.564 15:57:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.564 15:57:20 -- common/autotest_common.sh@10 -- # set +x 00:05:44.824 ************************************ 00:05:44.824 START TEST event 00:05:44.824 ************************************ 00:05:44.825 15:57:20 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:44.825 * Looking for test storage... 
00:05:44.825 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:44.825 15:57:20 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:44.825 15:57:20 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:44.825 15:57:20 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:44.825 15:57:20 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:44.825 15:57:20 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.825 15:57:20 event -- common/autotest_common.sh@10 -- # set +x 00:05:44.825 ************************************ 00:05:44.825 START TEST event_perf 00:05:44.825 ************************************ 00:05:44.825 15:57:20 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:44.825 Running I/O for 1 seconds...[2024-07-15 15:57:20.563016] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:05:44.825 [2024-07-15 15:57:20.563110] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2074512 ] 00:05:44.825 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.825 [2024-07-15 15:57:20.630582] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:45.085 [2024-07-15 15:57:20.705427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.085 [2024-07-15 15:57:20.705542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:45.085 [2024-07-15 15:57:20.705696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.085 Running I/O for 1 seconds...[2024-07-15 15:57:20.705697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:46.025 00:05:46.025 lcore 0: 177809 00:05:46.025 lcore 1: 177808 00:05:46.025 lcore 2: 177808 00:05:46.025 lcore 3: 177811 00:05:46.025 done. 00:05:46.025 00:05:46.025 real 0m1.218s 00:05:46.025 user 0m4.130s 00:05:46.025 sys 0m0.084s 00:05:46.025 15:57:21 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.025 15:57:21 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:46.025 ************************************ 00:05:46.025 END TEST event_perf 00:05:46.025 ************************************ 00:05:46.025 15:57:21 event -- common/autotest_common.sh@1142 -- # return 0 00:05:46.025 15:57:21 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:46.025 15:57:21 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:46.025 15:57:21 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.025 15:57:21 event -- common/autotest_common.sh@10 -- # set +x 00:05:46.025 ************************************ 00:05:46.025 START TEST event_reactor 00:05:46.025 ************************************ 00:05:46.025 15:57:21 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:46.025 [2024-07-15 15:57:21.858801] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
00:05:46.025 [2024-07-15 15:57:21.858890] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2074864 ] 00:05:46.286 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.286 [2024-07-15 15:57:21.923171] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.286 [2024-07-15 15:57:21.985829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.226 test_start 00:05:47.226 oneshot 00:05:47.226 tick 100 00:05:47.226 tick 100 00:05:47.226 tick 250 00:05:47.226 tick 100 00:05:47.226 tick 100 00:05:47.226 tick 100 00:05:47.226 tick 250 00:05:47.226 tick 500 00:05:47.226 tick 100 00:05:47.226 tick 100 00:05:47.226 tick 250 00:05:47.226 tick 100 00:05:47.226 tick 100 00:05:47.226 test_end 00:05:47.226 00:05:47.226 real 0m1.202s 00:05:47.226 user 0m1.128s 00:05:47.226 sys 0m0.070s 00:05:47.226 15:57:23 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.226 15:57:23 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:47.226 ************************************ 00:05:47.226 END TEST event_reactor 00:05:47.226 ************************************ 00:05:47.497 15:57:23 event -- common/autotest_common.sh@1142 -- # return 0 00:05:47.497 15:57:23 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:47.497 15:57:23 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:47.497 15:57:23 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.497 15:57:23 event -- common/autotest_common.sh@10 -- # set +x 00:05:47.497 ************************************ 00:05:47.497 START TEST event_reactor_perf 00:05:47.497 ************************************ 00:05:47.497 15:57:23 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:47.497 [2024-07-15 15:57:23.136853] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
00:05:47.497 [2024-07-15 15:57:23.136955] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2075216 ] 00:05:47.497 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.497 [2024-07-15 15:57:23.199831] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.497 [2024-07-15 15:57:23.263441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.886 test_start 00:05:48.886 test_end 00:05:48.886 Performance: 369392 events per second 00:05:48.886 00:05:48.886 real 0m1.200s 00:05:48.886 user 0m1.129s 00:05:48.886 sys 0m0.067s 00:05:48.886 15:57:24 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.886 15:57:24 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:48.886 ************************************ 00:05:48.886 END TEST event_reactor_perf 00:05:48.886 ************************************ 00:05:48.886 15:57:24 event -- common/autotest_common.sh@1142 -- # return 0 00:05:48.886 15:57:24 event -- event/event.sh@49 -- # uname -s 00:05:48.886 15:57:24 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:48.886 15:57:24 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:48.886 15:57:24 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:48.886 15:57:24 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.886 15:57:24 event -- common/autotest_common.sh@10 -- # set +x 00:05:48.886 ************************************ 00:05:48.886 START TEST event_scheduler 00:05:48.886 ************************************ 00:05:48.886 15:57:24 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:48.886 * Looking for test storage... 00:05:48.886 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:48.886 15:57:24 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:48.886 15:57:24 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2075520 00:05:48.886 15:57:24 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:48.886 15:57:24 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:48.886 15:57:24 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2075520 00:05:48.886 15:57:24 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 2075520 ']' 00:05:48.886 15:57:24 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.886 15:57:24 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:48.886 15:57:24 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:48.886 15:57:24 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:48.886 15:57:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:48.886 [2024-07-15 15:57:24.552355] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:05:48.886 [2024-07-15 15:57:24.552429] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2075520 ] 00:05:48.886 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.886 [2024-07-15 15:57:24.602712] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:48.886 [2024-07-15 15:57:24.656971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.886 [2024-07-15 15:57:24.657144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.886 [2024-07-15 15:57:24.657241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:48.886 [2024-07-15 15:57:24.657242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:49.829 15:57:25 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.829 15:57:25 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:49.829 15:57:25 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:49.829 15:57:25 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.829 15:57:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:49.829 [2024-07-15 15:57:25.323534] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:49.829 [2024-07-15 15:57:25.323549] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:49.829 [2024-07-15 15:57:25.323557] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:49.829 [2024-07-15 15:57:25.323562] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:49.829 [2024-07-15 15:57:25.323566] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:49.829 15:57:25 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.829 15:57:25 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:49.829 15:57:25 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.829 15:57:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:49.829 [2024-07-15 15:57:25.378092] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
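At this point the scheduler app, started with --wait-for-rpc, has been switched to the dynamic scheduler and taken through framework_start_init; the scheduler_create_thread test below then adds and manipulates threads through an RPC plugin bundled with the test. Issued by hand against the default /var/tmp/spdk.sock, the same calls would look roughly like this (rpc_cmd in the trace wraps scripts/rpc.py, and the plugin module has to be importable via PYTHONPATH):

  scripts/rpc.py framework_set_scheduler dynamic
  scripts/rpc.py framework_start_init
  # plugin RPC used by the test: a thread pinned to core 0 that is 100% active
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create \
      -n active_pinned -m 0x1 -a 100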
00:05:49.829 15:57:25 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.829 15:57:25 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:49.829 15:57:25 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:49.829 15:57:25 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.829 15:57:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:49.829 ************************************ 00:05:49.829 START TEST scheduler_create_thread 00:05:49.829 ************************************ 00:05:49.829 15:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:49.829 15:57:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:49.829 15:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.829 15:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.829 2 00:05:49.829 15:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.829 15:57:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:49.829 15:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.829 15:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.829 3 00:05:49.829 15:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.829 15:57:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:49.829 15:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.829 15:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.829 4 00:05:49.829 15:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.829 15:57:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:49.829 15:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.829 15:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.829 5 00:05:49.829 15:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.829 15:57:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:49.829 15:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.829 15:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.829 6 00:05:49.829 15:57:25 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.829 15:57:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:49.829 15:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.829 15:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.829 7 00:05:49.829 15:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.829 15:57:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:49.829 15:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.829 15:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.829 8 00:05:49.829 15:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.829 15:57:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:49.829 15:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.829 15:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.829 9 00:05:49.829 15:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.829 15:57:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:49.829 15:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.829 15:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.829 10 00:05:49.829 15:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.829 15:57:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:49.829 15:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.829 15:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.401 15:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.401 15:57:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:50.401 15:57:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:50.401 15:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.401 15:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.344 15:57:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.344 15:57:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:51.344 15:57:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.344 15:57:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.286 15:57:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.286 15:57:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:52.286 15:57:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:52.286 15:57:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.286 15:57:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.858 15:57:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.858 00:05:52.858 real 0m3.235s 00:05:52.858 user 0m0.021s 00:05:52.858 sys 0m0.010s 00:05:52.858 15:57:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.858 15:57:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.858 ************************************ 00:05:52.858 END TEST scheduler_create_thread 00:05:52.858 ************************************ 00:05:52.858 15:57:28 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:52.858 15:57:28 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:52.858 15:57:28 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2075520 00:05:52.858 15:57:28 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 2075520 ']' 00:05:52.858 15:57:28 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 2075520 00:05:52.858 15:57:28 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:52.858 15:57:28 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:52.858 15:57:28 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2075520 00:05:53.119 15:57:28 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:53.119 15:57:28 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:53.119 15:57:28 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2075520' 00:05:53.119 killing process with pid 2075520 00:05:53.119 15:57:28 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 2075520 00:05:53.119 15:57:28 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 2075520 00:05:53.380 [2024-07-15 15:57:29.030455] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:53.380 00:05:53.380 real 0m4.816s 00:05:53.380 user 0m9.956s 00:05:53.380 sys 0m0.349s 00:05:53.380 15:57:29 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.380 15:57:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:53.380 ************************************ 00:05:53.380 END TEST event_scheduler 00:05:53.380 ************************************ 00:05:53.641 15:57:29 event -- common/autotest_common.sh@1142 -- # return 0 00:05:53.641 15:57:29 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:53.641 15:57:29 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:53.641 15:57:29 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:53.641 15:57:29 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.641 15:57:29 event -- common/autotest_common.sh@10 -- # set +x 00:05:53.641 ************************************ 00:05:53.641 START TEST app_repeat 00:05:53.641 ************************************ 00:05:53.641 15:57:29 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:53.641 15:57:29 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.641 15:57:29 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.641 15:57:29 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:53.641 15:57:29 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:53.641 15:57:29 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:53.641 15:57:29 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:53.641 15:57:29 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:53.641 15:57:29 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2076481 00:05:53.641 15:57:29 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:53.641 15:57:29 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:53.641 15:57:29 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2076481' 00:05:53.641 Process app_repeat pid: 2076481 00:05:53.641 15:57:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:53.641 15:57:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:53.641 spdk_app_start Round 0 00:05:53.641 15:57:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2076481 /var/tmp/spdk-nbd.sock 00:05:53.641 15:57:29 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2076481 ']' 00:05:53.641 15:57:29 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:53.641 15:57:29 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.641 15:57:29 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:53.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:53.641 15:57:29 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.641 15:57:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:53.641 [2024-07-15 15:57:29.328049] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
00:05:53.641 [2024-07-15 15:57:29.328116] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2076481 ] 00:05:53.641 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.641 [2024-07-15 15:57:29.390466] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:53.641 [2024-07-15 15:57:29.460653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.641 [2024-07-15 15:57:29.460655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.583 15:57:30 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.583 15:57:30 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:54.583 15:57:30 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.583 Malloc0 00:05:54.583 15:57:30 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.847 Malloc1 00:05:54.847 15:57:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.847 15:57:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.847 15:57:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.847 15:57:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:54.847 15:57:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.847 15:57:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:54.847 15:57:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.847 15:57:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.847 15:57:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.847 15:57:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:54.847 15:57:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.847 15:57:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:54.847 15:57:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:54.847 15:57:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:54.847 15:57:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.847 15:57:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:54.847 /dev/nbd0 00:05:54.847 15:57:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:54.847 15:57:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:54.847 15:57:30 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:54.847 15:57:30 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:54.847 15:57:30 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:54.847 15:57:30 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:54.847 15:57:30 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:54.847 15:57:30 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:54.847 15:57:30 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:54.847 15:57:30 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:54.847 15:57:30 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:54.847 1+0 records in 00:05:54.847 1+0 records out 00:05:54.847 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267669 s, 15.3 MB/s 00:05:54.847 15:57:30 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:54.847 15:57:30 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:54.847 15:57:30 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:54.847 15:57:30 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:54.847 15:57:30 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:54.847 15:57:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.847 15:57:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.847 15:57:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:55.144 /dev/nbd1 00:05:55.144 15:57:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:55.144 15:57:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:55.144 15:57:30 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:55.144 15:57:30 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:55.144 15:57:30 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:55.144 15:57:30 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:55.144 15:57:30 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:55.144 15:57:30 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:55.144 15:57:30 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:55.144 15:57:30 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:55.144 15:57:30 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:55.144 1+0 records in 00:05:55.144 1+0 records out 00:05:55.144 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031325 s, 13.1 MB/s 00:05:55.144 15:57:30 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.144 15:57:30 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:55.144 15:57:30 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.144 15:57:30 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:55.144 15:57:30 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:55.144 15:57:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.144 15:57:30 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.144 15:57:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.144 15:57:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.144 15:57:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.418 15:57:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:55.418 { 00:05:55.418 "nbd_device": "/dev/nbd0", 00:05:55.418 "bdev_name": "Malloc0" 00:05:55.418 }, 00:05:55.418 { 00:05:55.418 "nbd_device": "/dev/nbd1", 00:05:55.418 "bdev_name": "Malloc1" 00:05:55.418 } 00:05:55.418 ]' 00:05:55.418 15:57:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:55.418 { 00:05:55.418 "nbd_device": "/dev/nbd0", 00:05:55.418 "bdev_name": "Malloc0" 00:05:55.418 }, 00:05:55.418 { 00:05:55.418 "nbd_device": "/dev/nbd1", 00:05:55.418 "bdev_name": "Malloc1" 00:05:55.418 } 00:05:55.418 ]' 00:05:55.418 15:57:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.418 15:57:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:55.418 /dev/nbd1' 00:05:55.418 15:57:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:55.418 /dev/nbd1' 00:05:55.418 15:57:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.418 15:57:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:55.418 15:57:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:55.418 15:57:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:55.418 15:57:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:55.418 15:57:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:55.418 15:57:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.418 15:57:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.418 15:57:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:55.418 15:57:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:55.418 15:57:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:55.418 15:57:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:55.418 256+0 records in 00:05:55.418 256+0 records out 00:05:55.418 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0117243 s, 89.4 MB/s 00:05:55.418 15:57:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.418 15:57:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:55.418 256+0 records in 00:05:55.418 256+0 records out 00:05:55.418 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0159685 s, 65.7 MB/s 00:05:55.418 15:57:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.418 15:57:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:55.418 256+0 records in 00:05:55.419 256+0 records out 00:05:55.419 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0253461 s, 41.4 MB/s 00:05:55.419 15:57:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:55.419 15:57:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.419 15:57:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.419 15:57:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:55.419 15:57:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:55.419 15:57:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:55.419 15:57:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:55.419 15:57:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.419 15:57:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:55.419 15:57:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.419 15:57:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:55.419 15:57:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:55.419 15:57:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:55.419 15:57:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.419 15:57:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.419 15:57:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:55.419 15:57:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:55.419 15:57:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.419 15:57:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:55.684 15:57:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:55.684 15:57:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:55.684 15:57:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:55.684 15:57:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.684 15:57:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.684 15:57:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:55.684 15:57:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:55.684 15:57:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.684 15:57:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.684 15:57:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:55.684 15:57:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:55.684 15:57:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:55.684 15:57:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:55.684 15:57:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.684 15:57:31 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.684 15:57:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:55.684 15:57:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:55.684 15:57:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.684 15:57:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.684 15:57:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.684 15:57:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.945 15:57:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:55.945 15:57:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:55.945 15:57:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.945 15:57:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:55.945 15:57:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:55.945 15:57:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.945 15:57:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:55.945 15:57:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:55.945 15:57:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:55.945 15:57:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:55.945 15:57:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:55.945 15:57:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:55.945 15:57:31 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:56.205 15:57:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:56.205 [2024-07-15 15:57:32.009768] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:56.465 [2024-07-15 15:57:32.073824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.465 [2024-07-15 15:57:32.073826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.465 [2024-07-15 15:57:32.105213] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:56.465 [2024-07-15 15:57:32.105248] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:59.773 15:57:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:59.773 15:57:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:59.773 spdk_app_start Round 1 00:05:59.773 15:57:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2076481 /var/tmp/spdk-nbd.sock 00:05:59.773 15:57:34 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2076481 ']' 00:05:59.773 15:57:34 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:59.773 15:57:34 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.773 15:57:34 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:59.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
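The Round 0 pass that just completed (and that Rounds 1 and 2 below repeat) follows the nbd_rpc_data_verify pattern. A minimal sketch of one cycle, using the same RPCs and dd/cmp checks as the trace (paths are shortened; the real run keeps its scratch file under spdk/test/event/, so $tmp here is only an illustration):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    tmp=nbdrandtest                                   # scratch file, location illustrative

    $rpc -s $sock bdev_malloc_create 64 4096          # -> Malloc0 (64 MiB, 4 KiB blocks)
    $rpc -s $sock nbd_start_disk Malloc0 /dev/nbd0    # expose the bdev as /dev/nbd0
    dd if=/dev/urandom of=$tmp bs=4096 count=256      # 1 MiB of random data
    dd if=$tmp of=/dev/nbd0 bs=4096 count=256 oflag=direct   # write it through nbd
    cmp -b -n 1M $tmp /dev/nbd0                       # read back and compare byte-for-byte
    $rpc -s $sock nbd_stop_disk /dev/nbd0
    $rpc -s $sock spdk_kill_instance SIGTERM          # tear the app down for the next round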
00:05:59.773 15:57:34 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.773 15:57:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:59.773 15:57:35 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:59.773 15:57:35 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:59.773 15:57:35 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.773 Malloc0 00:05:59.773 15:57:35 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.773 Malloc1 00:05:59.773 15:57:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:59.773 15:57:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.773 15:57:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.773 15:57:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:59.773 15:57:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.773 15:57:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:59.773 15:57:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:59.773 15:57:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.774 15:57:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.774 15:57:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:59.774 15:57:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.774 15:57:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:59.774 15:57:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:59.774 15:57:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:59.774 15:57:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.774 15:57:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:59.774 /dev/nbd0 00:05:59.774 15:57:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:59.774 15:57:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:59.774 15:57:35 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:59.774 15:57:35 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:59.774 15:57:35 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:59.774 15:57:35 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:59.774 15:57:35 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:59.774 15:57:35 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:59.774 15:57:35 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:59.774 15:57:35 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:59.774 15:57:35 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:59.774 1+0 records in 00:05:59.774 1+0 records out 00:05:59.774 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264044 s, 15.5 MB/s 00:05:59.774 15:57:35 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:59.774 15:57:35 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:59.774 15:57:35 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:59.774 15:57:35 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:59.774 15:57:35 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:59.774 15:57:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:59.774 15:57:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.774 15:57:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:00.034 /dev/nbd1 00:06:00.034 15:57:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:00.034 15:57:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:00.034 15:57:35 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:00.034 15:57:35 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:00.034 15:57:35 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:00.034 15:57:35 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:00.034 15:57:35 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:00.034 15:57:35 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:00.034 15:57:35 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:00.034 15:57:35 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:00.034 15:57:35 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.034 1+0 records in 00:06:00.034 1+0 records out 00:06:00.034 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266075 s, 15.4 MB/s 00:06:00.034 15:57:35 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.034 15:57:35 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:00.034 15:57:35 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.034 15:57:35 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:00.034 15:57:35 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:00.034 15:57:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.034 15:57:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.034 15:57:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.034 15:57:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.034 15:57:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.295 15:57:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:00.295 { 00:06:00.295 "nbd_device": "/dev/nbd0", 00:06:00.295 "bdev_name": "Malloc0" 00:06:00.295 }, 00:06:00.295 { 00:06:00.295 "nbd_device": "/dev/nbd1", 00:06:00.295 "bdev_name": "Malloc1" 00:06:00.295 } 00:06:00.295 ]' 00:06:00.295 15:57:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:00.295 { 00:06:00.295 "nbd_device": "/dev/nbd0", 00:06:00.295 "bdev_name": "Malloc0" 00:06:00.295 }, 00:06:00.295 { 00:06:00.295 "nbd_device": "/dev/nbd1", 00:06:00.295 "bdev_name": "Malloc1" 00:06:00.295 } 00:06:00.295 ]' 00:06:00.295 15:57:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.295 15:57:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:00.295 /dev/nbd1' 00:06:00.295 15:57:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:00.295 /dev/nbd1' 00:06:00.295 15:57:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.295 15:57:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:00.295 15:57:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:00.295 15:57:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:00.295 15:57:35 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:00.295 15:57:35 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:00.295 15:57:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.295 15:57:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.295 15:57:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:00.295 15:57:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:00.295 15:57:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:00.295 15:57:35 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:00.295 256+0 records in 00:06:00.295 256+0 records out 00:06:00.295 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116406 s, 90.1 MB/s 00:06:00.295 15:57:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.295 15:57:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:00.295 256+0 records in 00:06:00.295 256+0 records out 00:06:00.295 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.019319 s, 54.3 MB/s 00:06:00.295 15:57:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.295 15:57:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:00.295 256+0 records in 00:06:00.295 256+0 records out 00:06:00.295 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0216441 s, 48.4 MB/s 00:06:00.295 15:57:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:00.295 15:57:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.295 15:57:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.295 15:57:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:00.295 15:57:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:00.295 15:57:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:00.295 15:57:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:00.295 15:57:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.295 15:57:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:00.295 15:57:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.295 15:57:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:00.295 15:57:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:00.295 15:57:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:00.295 15:57:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.295 15:57:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.295 15:57:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:00.295 15:57:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:00.295 15:57:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.295 15:57:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:00.557 15:57:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:00.557 15:57:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:00.557 15:57:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:00.557 15:57:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.557 15:57:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.557 15:57:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:00.557 15:57:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:00.557 15:57:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.557 15:57:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.557 15:57:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:00.557 15:57:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:00.557 15:57:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:00.557 15:57:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:00.557 15:57:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.557 15:57:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.557 15:57:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:00.557 15:57:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:00.557 15:57:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.557 15:57:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.557 15:57:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.557 15:57:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.818 15:57:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:00.818 15:57:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:00.818 15:57:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.818 15:57:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:00.818 15:57:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:00.818 15:57:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.818 15:57:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:00.818 15:57:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:00.818 15:57:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:00.818 15:57:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:00.818 15:57:36 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:00.818 15:57:36 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:00.818 15:57:36 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:01.078 15:57:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:01.078 [2024-07-15 15:57:36.907703] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:01.339 [2024-07-15 15:57:36.970885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.339 [2024-07-15 15:57:36.970887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.339 [2024-07-15 15:57:37.002982] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:01.339 [2024-07-15 15:57:37.003016] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:04.639 15:57:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:04.639 15:57:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:04.639 spdk_app_start Round 2 00:06:04.639 15:57:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2076481 /var/tmp/spdk-nbd.sock 00:06:04.639 15:57:39 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2076481 ']' 00:06:04.639 15:57:39 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:04.639 15:57:39 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.639 15:57:39 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:04.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
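The nbd_get_count checks interleaved through these rounds parse the nbd_get_disks RPC output. Roughly, assuming the same $rpc and $sock as the sketch above (the `|| true` mirrors the trace, since grep -c exits non-zero when nothing matches):

    disks_json=$($rpc -s $sock nbd_get_disks)            # JSON array of {nbd_device, bdev_name}
    names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$names" | grep -c /dev/nbd || true)
    # count is 2 while Malloc0/Malloc1 are exported, 0 after nbd_stop_disk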
00:06:04.639 15:57:39 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.639 15:57:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:04.639 15:57:39 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.639 15:57:39 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:04.639 15:57:39 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.639 Malloc0 00:06:04.639 15:57:40 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.639 Malloc1 00:06:04.639 15:57:40 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:04.639 15:57:40 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.639 15:57:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.639 15:57:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:04.639 15:57:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.639 15:57:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:04.639 15:57:40 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:04.639 15:57:40 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.639 15:57:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.639 15:57:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:04.639 15:57:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.639 15:57:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:04.639 15:57:40 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:04.639 15:57:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:04.639 15:57:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.639 15:57:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:04.639 /dev/nbd0 00:06:04.639 15:57:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:04.639 15:57:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:04.639 15:57:40 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:04.639 15:57:40 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:04.639 15:57:40 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:04.639 15:57:40 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:04.639 15:57:40 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:04.639 15:57:40 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:04.639 15:57:40 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:04.639 15:57:40 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:04.639 15:57:40 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:04.639 1+0 records in 00:06:04.639 1+0 records out 00:06:04.639 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212027 s, 19.3 MB/s 00:06:04.639 15:57:40 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:04.639 15:57:40 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:04.639 15:57:40 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:04.639 15:57:40 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:04.639 15:57:40 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:04.639 15:57:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:04.639 15:57:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.639 15:57:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:04.900 /dev/nbd1 00:06:04.900 15:57:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:04.900 15:57:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:04.900 15:57:40 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:04.900 15:57:40 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:04.900 15:57:40 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:04.900 15:57:40 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:04.900 15:57:40 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:04.900 15:57:40 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:04.900 15:57:40 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:04.900 15:57:40 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:04.900 15:57:40 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:04.900 1+0 records in 00:06:04.900 1+0 records out 00:06:04.900 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000110847 s, 37.0 MB/s 00:06:04.900 15:57:40 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:04.900 15:57:40 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:04.900 15:57:40 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:04.900 15:57:40 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:04.900 15:57:40 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:04.900 15:57:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:04.900 15:57:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.900 15:57:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:04.900 15:57:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.900 15:57:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:05.212 15:57:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:05.212 { 00:06:05.212 "nbd_device": "/dev/nbd0", 00:06:05.212 "bdev_name": "Malloc0" 00:06:05.212 }, 00:06:05.212 { 00:06:05.212 "nbd_device": "/dev/nbd1", 00:06:05.212 "bdev_name": "Malloc1" 00:06:05.212 } 00:06:05.212 ]' 00:06:05.212 15:57:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:05.212 { 00:06:05.212 "nbd_device": "/dev/nbd0", 00:06:05.212 "bdev_name": "Malloc0" 00:06:05.212 }, 00:06:05.212 { 00:06:05.212 "nbd_device": "/dev/nbd1", 00:06:05.212 "bdev_name": "Malloc1" 00:06:05.212 } 00:06:05.212 ]' 00:06:05.212 15:57:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:05.212 15:57:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:05.212 /dev/nbd1' 00:06:05.212 15:57:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:05.212 /dev/nbd1' 00:06:05.212 15:57:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:05.212 15:57:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:05.212 15:57:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:05.212 15:57:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:05.212 15:57:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:05.212 15:57:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:05.212 15:57:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.212 15:57:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.212 15:57:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:05.212 15:57:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:05.212 15:57:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:05.212 15:57:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:05.212 256+0 records in 00:06:05.212 256+0 records out 00:06:05.212 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012465 s, 84.1 MB/s 00:06:05.212 15:57:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.212 15:57:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:05.212 256+0 records in 00:06:05.212 256+0 records out 00:06:05.212 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0160256 s, 65.4 MB/s 00:06:05.212 15:57:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.212 15:57:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:05.212 256+0 records in 00:06:05.212 256+0 records out 00:06:05.212 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0207227 s, 50.6 MB/s 00:06:05.212 15:57:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:05.212 15:57:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.212 15:57:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.212 15:57:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:05.212 15:57:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:05.212 15:57:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:05.212 15:57:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:05.212 15:57:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.212 15:57:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:05.212 15:57:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.212 15:57:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:05.212 15:57:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:05.212 15:57:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:05.212 15:57:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.212 15:57:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.212 15:57:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:05.212 15:57:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:05.212 15:57:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.212 15:57:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:05.472 15:57:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:05.472 15:57:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:05.472 15:57:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:05.472 15:57:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.472 15:57:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.472 15:57:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:05.472 15:57:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:05.472 15:57:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.472 15:57:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.472 15:57:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:05.472 15:57:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:05.472 15:57:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:05.472 15:57:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:05.472 15:57:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.472 15:57:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.472 15:57:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:05.472 15:57:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:05.472 15:57:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.472 15:57:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:05.472 15:57:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.472 15:57:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:05.732 15:57:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:05.732 15:57:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:05.732 15:57:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:05.732 15:57:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:05.732 15:57:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:05.732 15:57:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:05.732 15:57:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:05.732 15:57:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:05.732 15:57:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:05.732 15:57:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:05.732 15:57:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:05.732 15:57:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:05.732 15:57:41 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:05.992 15:57:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:05.992 [2024-07-15 15:57:41.782588] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:06.250 [2024-07-15 15:57:41.846250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.251 [2024-07-15 15:57:41.846252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.251 [2024-07-15 15:57:41.877574] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:06.251 [2024-07-15 15:57:41.877608] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:09.549 15:57:44 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2076481 /var/tmp/spdk-nbd.sock 00:06:09.549 15:57:44 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2076481 ']' 00:06:09.549 15:57:44 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:09.549 15:57:44 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.549 15:57:44 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:09.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:09.549 15:57:44 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.549 15:57:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:09.549 15:57:44 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.549 15:57:44 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:09.549 15:57:44 event.app_repeat -- event/event.sh@39 -- # killprocess 2076481 00:06:09.549 15:57:44 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 2076481 ']' 00:06:09.549 15:57:44 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 2076481 00:06:09.549 15:57:44 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:09.549 15:57:44 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:09.549 15:57:44 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2076481 00:06:09.549 15:57:44 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:09.549 15:57:44 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:09.549 15:57:44 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2076481' 00:06:09.549 killing process with pid 2076481 00:06:09.549 15:57:44 event.app_repeat -- common/autotest_common.sh@967 -- # kill 2076481 00:06:09.549 15:57:44 event.app_repeat -- common/autotest_common.sh@972 -- # wait 2076481 00:06:09.549 spdk_app_start is called in Round 0. 00:06:09.549 Shutdown signal received, stop current app iteration 00:06:09.549 Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 reinitialization... 00:06:09.549 spdk_app_start is called in Round 1. 00:06:09.549 Shutdown signal received, stop current app iteration 00:06:09.549 Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 reinitialization... 00:06:09.549 spdk_app_start is called in Round 2. 00:06:09.549 Shutdown signal received, stop current app iteration 00:06:09.549 Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 reinitialization... 00:06:09.549 spdk_app_start is called in Round 3. 
00:06:09.549 Shutdown signal received, stop current app iteration 00:06:09.549 15:57:44 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:09.549 15:57:44 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:09.549 00:06:09.549 real 0m15.685s 00:06:09.549 user 0m33.843s 00:06:09.549 sys 0m2.106s 00:06:09.549 15:57:44 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.549 15:57:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:09.549 ************************************ 00:06:09.549 END TEST app_repeat 00:06:09.549 ************************************ 00:06:09.549 15:57:45 event -- common/autotest_common.sh@1142 -- # return 0 00:06:09.549 15:57:45 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:09.549 15:57:45 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:09.549 15:57:45 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:09.549 15:57:45 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.549 15:57:45 event -- common/autotest_common.sh@10 -- # set +x 00:06:09.549 ************************************ 00:06:09.549 START TEST cpu_locks 00:06:09.549 ************************************ 00:06:09.549 15:57:45 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:09.549 * Looking for test storage... 00:06:09.549 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:09.549 15:57:45 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:09.549 15:57:45 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:09.549 15:57:45 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:09.549 15:57:45 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:09.549 15:57:45 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:09.549 15:57:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.549 15:57:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.549 ************************************ 00:06:09.549 START TEST default_locks 00:06:09.549 ************************************ 00:06:09.549 15:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:09.549 15:57:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2079907 00:06:09.549 15:57:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2079907 00:06:09.549 15:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2079907 ']' 00:06:09.549 15:57:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:09.549 15:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.549 15:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.549 15:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:09.549 15:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.549 15:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.549 [2024-07-15 15:57:45.250150] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:06:09.549 [2024-07-15 15:57:45.250213] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2079907 ] 00:06:09.549 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.549 [2024-07-15 15:57:45.310469] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.549 [2024-07-15 15:57:45.374300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.513 15:57:46 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.513 15:57:46 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:10.513 15:57:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2079907 00:06:10.513 15:57:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2079907 00:06:10.513 15:57:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:10.774 lslocks: write error 00:06:10.774 15:57:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2079907 00:06:10.774 15:57:46 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 2079907 ']' 00:06:10.774 15:57:46 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 2079907 00:06:10.774 15:57:46 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:10.774 15:57:46 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:10.774 15:57:46 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2079907 00:06:10.774 15:57:46 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:10.774 15:57:46 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:10.774 15:57:46 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2079907' 00:06:10.774 killing process with pid 2079907 00:06:10.774 15:57:46 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 2079907 00:06:10.774 15:57:46 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 2079907 00:06:11.035 15:57:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2079907 00:06:11.035 15:57:46 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:11.035 15:57:46 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2079907 00:06:11.035 15:57:46 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:11.035 15:57:46 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:11.035 15:57:46 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:11.035 15:57:46 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:11.035 15:57:46 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 2079907 00:06:11.035 15:57:46 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2079907 ']' 00:06:11.035 15:57:46 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.035 15:57:46 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.035 15:57:46 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.035 15:57:46 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.035 15:57:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2079907) - No such process 00:06:11.035 ERROR: process (pid: 2079907) is no longer running 00:06:11.035 15:57:46 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:11.035 15:57:46 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:11.035 15:57:46 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:11.035 15:57:46 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:11.035 15:57:46 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:11.035 15:57:46 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:11.035 15:57:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:11.035 15:57:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:11.035 15:57:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:11.035 15:57:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:11.035 00:06:11.035 real 0m1.463s 00:06:11.035 user 0m1.558s 00:06:11.035 sys 0m0.480s 00:06:11.035 15:57:46 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.035 15:57:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.035 ************************************ 00:06:11.035 END TEST default_locks 00:06:11.035 ************************************ 00:06:11.035 15:57:46 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:11.035 15:57:46 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:11.036 15:57:46 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:11.036 15:57:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.036 15:57:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.036 ************************************ 00:06:11.036 START TEST default_locks_via_rpc 00:06:11.036 ************************************ 00:06:11.036 15:57:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:11.036 15:57:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2080280 00:06:11.036 15:57:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2080280 00:06:11.036 15:57:46 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:11.036 15:57:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2080280 ']' 00:06:11.036 15:57:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.036 15:57:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.036 15:57:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.036 15:57:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.036 15:57:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.036 [2024-07-15 15:57:46.787908] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:06:11.036 [2024-07-15 15:57:46.787967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2080280 ] 00:06:11.036 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.036 [2024-07-15 15:57:46.849904] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.296 [2024-07-15 15:57:46.923387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.868 15:57:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:11.868 15:57:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:11.868 15:57:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:11.868 15:57:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.868 15:57:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.868 15:57:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.868 15:57:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:11.868 15:57:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:11.868 15:57:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:11.868 15:57:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:11.868 15:57:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:11.868 15:57:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.868 15:57:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.868 15:57:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.868 15:57:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2080280 00:06:11.868 15:57:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2080280 00:06:11.868 15:57:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 
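A minimal sketch of the check the locks_exist helper performs above, assuming a running spdk_tgt whose PID is in $pid; each claimed core is backed by a lock on a /var/tmp/spdk_cpu_lock_* file, so lslocks on the process is enough to verify it:

    # succeeds only if the target currently holds its per-core lock files
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo 'core locks held'
    # the locks can be released and re-acquired at runtime over RPC
    scripts/rpc.py framework_disable_cpumask_locks
    scripts/rpc.py framework_enable_cpumask_locks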
00:06:12.128 15:57:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2080280 00:06:12.129 15:57:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 2080280 ']' 00:06:12.129 15:57:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 2080280 00:06:12.129 15:57:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:12.129 15:57:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:12.129 15:57:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2080280 00:06:12.389 15:57:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:12.390 15:57:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:12.390 15:57:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2080280' 00:06:12.390 killing process with pid 2080280 00:06:12.390 15:57:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 2080280 00:06:12.390 15:57:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 2080280 00:06:12.390 00:06:12.390 real 0m1.479s 00:06:12.390 user 0m1.559s 00:06:12.390 sys 0m0.498s 00:06:12.390 15:57:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.390 15:57:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.390 ************************************ 00:06:12.390 END TEST default_locks_via_rpc 00:06:12.390 ************************************ 00:06:12.650 15:57:48 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:12.650 15:57:48 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:12.650 15:57:48 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:12.650 15:57:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.650 15:57:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.650 ************************************ 00:06:12.650 START TEST non_locking_app_on_locked_coremask 00:06:12.650 ************************************ 00:06:12.650 15:57:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:12.650 15:57:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2080640 00:06:12.650 15:57:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2080640 /var/tmp/spdk.sock 00:06:12.650 15:57:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:12.650 15:57:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2080640 ']' 00:06:12.650 15:57:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.650 15:57:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.650 15:57:48 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.650 15:57:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.650 15:57:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.650 [2024-07-15 15:57:48.339255] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:06:12.650 [2024-07-15 15:57:48.339307] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2080640 ] 00:06:12.650 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.650 [2024-07-15 15:57:48.398137] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.650 [2024-07-15 15:57:48.465252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.593 15:57:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:13.593 15:57:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:13.593 15:57:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2080653 00:06:13.593 15:57:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2080653 /var/tmp/spdk2.sock 00:06:13.593 15:57:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2080653 ']' 00:06:13.593 15:57:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:13.593 15:57:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:13.593 15:57:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:13.593 15:57:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:13.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:13.593 15:57:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:13.593 15:57:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.593 [2024-07-15 15:57:49.138485] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:06:13.593 [2024-07-15 15:57:49.138536] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2080653 ] 00:06:13.593 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.593 [2024-07-15 15:57:49.227543] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:13.593 [2024-07-15 15:57:49.227570] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.593 [2024-07-15 15:57:49.356896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.219 15:57:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.219 15:57:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:14.219 15:57:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2080640 00:06:14.219 15:57:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:14.219 15:57:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2080640 00:06:14.790 lslocks: write error 00:06:14.790 15:57:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2080640 00:06:14.790 15:57:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2080640 ']' 00:06:14.790 15:57:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2080640 00:06:14.790 15:57:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:14.790 15:57:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:14.790 15:57:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2080640 00:06:14.790 15:57:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:14.790 15:57:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:14.790 15:57:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2080640' 00:06:14.790 killing process with pid 2080640 00:06:14.790 15:57:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2080640 00:06:14.790 15:57:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2080640 00:06:15.361 15:57:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2080653 00:06:15.361 15:57:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2080653 ']' 00:06:15.361 15:57:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2080653 00:06:15.361 15:57:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:15.361 15:57:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:15.361 15:57:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2080653 00:06:15.361 15:57:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:15.361 15:57:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:15.361 15:57:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2080653' 00:06:15.361 
killing process with pid 2080653 00:06:15.361 15:57:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2080653 00:06:15.361 15:57:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2080653 00:06:15.361 00:06:15.361 real 0m2.909s 00:06:15.361 user 0m3.171s 00:06:15.361 sys 0m0.878s 00:06:15.361 15:57:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.361 15:57:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.361 ************************************ 00:06:15.361 END TEST non_locking_app_on_locked_coremask 00:06:15.361 ************************************ 00:06:15.622 15:57:51 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:15.622 15:57:51 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:15.622 15:57:51 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:15.622 15:57:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.622 15:57:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.622 ************************************ 00:06:15.622 START TEST locking_app_on_unlocked_coremask 00:06:15.622 ************************************ 00:06:15.622 15:57:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:15.622 15:57:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2081201 00:06:15.622 15:57:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2081201 /var/tmp/spdk.sock 00:06:15.622 15:57:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:15.622 15:57:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2081201 ']' 00:06:15.622 15:57:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.622 15:57:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.622 15:57:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.622 15:57:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.622 15:57:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.622 [2024-07-15 15:57:51.316895] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
00:06:15.622 [2024-07-15 15:57:51.316956] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2081201 ] 00:06:15.622 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.622 [2024-07-15 15:57:51.380696] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:15.622 [2024-07-15 15:57:51.380737] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.622 [2024-07-15 15:57:51.452343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.566 15:57:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.566 15:57:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:16.566 15:57:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:16.566 15:57:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2081365 00:06:16.566 15:57:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2081365 /var/tmp/spdk2.sock 00:06:16.566 15:57:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2081365 ']' 00:06:16.566 15:57:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:16.566 15:57:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.566 15:57:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:16.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:16.566 15:57:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.566 15:57:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.566 [2024-07-15 15:57:52.135640] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
00:06:16.566 [2024-07-15 15:57:52.135692] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2081365 ] 00:06:16.566 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.566 [2024-07-15 15:57:52.225160] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.566 [2024-07-15 15:57:52.354057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.138 15:57:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:17.138 15:57:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:17.138 15:57:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2081365 00:06:17.138 15:57:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2081365 00:06:17.138 15:57:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:17.711 lslocks: write error 00:06:17.711 15:57:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2081201 00:06:17.711 15:57:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2081201 ']' 00:06:17.711 15:57:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2081201 00:06:17.711 15:57:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:17.711 15:57:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:17.711 15:57:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2081201 00:06:17.711 15:57:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:17.711 15:57:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:17.711 15:57:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2081201' 00:06:17.711 killing process with pid 2081201 00:06:17.711 15:57:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2081201 00:06:17.711 15:57:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2081201 00:06:17.971 15:57:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2081365 00:06:17.971 15:57:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2081365 ']' 00:06:17.971 15:57:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2081365 00:06:17.971 15:57:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:17.971 15:57:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:17.971 15:57:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2081365 00:06:18.231 15:57:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:06:18.231 15:57:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:18.231 15:57:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2081365' 00:06:18.231 killing process with pid 2081365 00:06:18.231 15:57:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2081365 00:06:18.231 15:57:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2081365 00:06:18.231 00:06:18.231 real 0m2.801s 00:06:18.231 user 0m3.065s 00:06:18.231 sys 0m0.815s 00:06:18.231 15:57:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.231 15:57:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.231 ************************************ 00:06:18.231 END TEST locking_app_on_unlocked_coremask 00:06:18.231 ************************************ 00:06:18.492 15:57:54 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:18.492 15:57:54 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:18.492 15:57:54 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:18.492 15:57:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.492 15:57:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.492 ************************************ 00:06:18.492 START TEST locking_app_on_locked_coremask 00:06:18.492 ************************************ 00:06:18.492 15:57:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:18.492 15:57:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2081740 00:06:18.492 15:57:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2081740 /var/tmp/spdk.sock 00:06:18.492 15:57:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:18.492 15:57:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2081740 ']' 00:06:18.492 15:57:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.492 15:57:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:18.492 15:57:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.492 15:57:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:18.492 15:57:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.492 [2024-07-15 15:57:54.187486] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
00:06:18.492 [2024-07-15 15:57:54.187534] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2081740 ] 00:06:18.492 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.492 [2024-07-15 15:57:54.246751] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.492 [2024-07-15 15:57:54.309911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.170 15:57:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.170 15:57:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:19.170 15:57:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2082072 00:06:19.170 15:57:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2082072 /var/tmp/spdk2.sock 00:06:19.170 15:57:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:19.170 15:57:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:19.170 15:57:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2082072 /var/tmp/spdk2.sock 00:06:19.170 15:57:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:19.170 15:57:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.170 15:57:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:19.170 15:57:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.170 15:57:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2082072 /var/tmp/spdk2.sock 00:06:19.170 15:57:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2082072 ']' 00:06:19.170 15:57:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.170 15:57:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.170 15:57:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:19.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:19.170 15:57:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.170 15:57:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.170 [2024-07-15 15:57:55.008949] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
00:06:19.170 [2024-07-15 15:57:55.009004] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2082072 ] 00:06:19.430 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.430 [2024-07-15 15:57:55.097953] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2081740 has claimed it. 00:06:19.430 [2024-07-15 15:57:55.097999] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:20.001 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2082072) - No such process 00:06:20.001 ERROR: process (pid: 2082072) is no longer running 00:06:20.001 15:57:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:20.001 15:57:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:20.001 15:57:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:20.001 15:57:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:20.001 15:57:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:20.001 15:57:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:20.001 15:57:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2081740 00:06:20.001 15:57:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2081740 00:06:20.001 15:57:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:20.262 lslocks: write error 00:06:20.262 15:57:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2081740 00:06:20.262 15:57:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2081740 ']' 00:06:20.262 15:57:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2081740 00:06:20.262 15:57:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:20.262 15:57:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:20.262 15:57:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2081740 00:06:20.262 15:57:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:20.262 15:57:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:20.262 15:57:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2081740' 00:06:20.262 killing process with pid 2081740 00:06:20.262 15:57:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2081740 00:06:20.262 15:57:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2081740 00:06:20.522 00:06:20.522 real 0m2.173s 00:06:20.522 user 0m2.417s 00:06:20.522 sys 0m0.598s 00:06:20.522 15:57:56 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.522 15:57:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.522 ************************************ 00:06:20.522 END TEST locking_app_on_locked_coremask 00:06:20.522 ************************************ 00:06:20.522 15:57:56 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:20.522 15:57:56 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:20.522 15:57:56 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:20.522 15:57:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.522 15:57:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.782 ************************************ 00:06:20.782 START TEST locking_overlapped_coremask 00:06:20.782 ************************************ 00:06:20.782 15:57:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:20.782 15:57:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2082352 00:06:20.782 15:57:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2082352 /var/tmp/spdk.sock 00:06:20.782 15:57:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:20.782 15:57:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2082352 ']' 00:06:20.782 15:57:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.782 15:57:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:20.783 15:57:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.783 15:57:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:20.783 15:57:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.783 [2024-07-15 15:57:56.432856] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
00:06:20.783 [2024-07-15 15:57:56.432910] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2082352 ] 00:06:20.783 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.783 [2024-07-15 15:57:56.496779] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:20.783 [2024-07-15 15:57:56.569671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.783 [2024-07-15 15:57:56.569794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.783 [2024-07-15 15:57:56.569796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.725 15:57:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:21.725 15:57:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:21.725 15:57:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:21.725 15:57:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2082448 00:06:21.725 15:57:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2082448 /var/tmp/spdk2.sock 00:06:21.725 15:57:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:21.725 15:57:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2082448 /var/tmp/spdk2.sock 00:06:21.725 15:57:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:21.725 15:57:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:21.725 15:57:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:21.725 15:57:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:21.725 15:57:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2082448 /var/tmp/spdk2.sock 00:06:21.725 15:57:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2082448 ']' 00:06:21.725 15:57:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:21.725 15:57:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:21.725 15:57:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:21.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:21.725 15:57:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:21.726 15:57:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.726 [2024-07-15 15:57:57.249474] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
00:06:21.726 [2024-07-15 15:57:57.249526] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2082448 ] 00:06:21.726 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.726 [2024-07-15 15:57:57.320710] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2082352 has claimed it. 00:06:21.726 [2024-07-15 15:57:57.320745] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:22.297 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2082448) - No such process 00:06:22.297 ERROR: process (pid: 2082448) is no longer running 00:06:22.297 15:57:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:22.297 15:57:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:22.297 15:57:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:22.297 15:57:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:22.297 15:57:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:22.297 15:57:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:22.297 15:57:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:22.297 15:57:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:22.297 15:57:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:22.298 15:57:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:22.298 15:57:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2082352 00:06:22.298 15:57:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 2082352 ']' 00:06:22.298 15:57:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 2082352 00:06:22.298 15:57:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:22.298 15:57:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:22.298 15:57:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2082352 00:06:22.298 15:57:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:22.298 15:57:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:22.298 15:57:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2082352' 00:06:22.298 killing process with pid 2082352 00:06:22.298 15:57:57 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 2082352 00:06:22.298 15:57:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 2082352 00:06:22.298 00:06:22.298 real 0m1.755s 00:06:22.298 user 0m4.954s 00:06:22.298 sys 0m0.366s 00:06:22.298 15:57:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.298 15:57:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.298 ************************************ 00:06:22.298 END TEST locking_overlapped_coremask 00:06:22.298 ************************************ 00:06:22.559 15:57:58 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:22.559 15:57:58 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:22.559 15:57:58 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:22.559 15:57:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.559 15:57:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.559 ************************************ 00:06:22.559 START TEST locking_overlapped_coremask_via_rpc 00:06:22.559 ************************************ 00:06:22.559 15:57:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:22.559 15:57:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2082805 00:06:22.559 15:57:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2082805 /var/tmp/spdk.sock 00:06:22.559 15:57:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:22.559 15:57:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2082805 ']' 00:06:22.559 15:57:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.559 15:57:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:22.559 15:57:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.559 15:57:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:22.559 15:57:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.559 [2024-07-15 15:57:58.270921] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:06:22.559 [2024-07-15 15:57:58.270986] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2082805 ] 00:06:22.559 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.559 [2024-07-15 15:57:58.332387] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:22.559 [2024-07-15 15:57:58.332419] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:22.819 [2024-07-15 15:57:58.405616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.819 [2024-07-15 15:57:58.405734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:22.819 [2024-07-15 15:57:58.405736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.390 15:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:23.390 15:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:23.390 15:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:23.390 15:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2082826 00:06:23.390 15:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2082826 /var/tmp/spdk2.sock 00:06:23.390 15:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2082826 ']' 00:06:23.390 15:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:23.390 15:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:23.390 15:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:23.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:23.390 15:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:23.390 15:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.390 [2024-07-15 15:57:59.087877] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:06:23.390 [2024-07-15 15:57:59.087930] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2082826 ] 00:06:23.390 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.390 [2024-07-15 15:57:59.158346] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
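Both targets in locking_overlapped_coremask_via_rpc start with --disable-cpumask-locks, so neither claims its core locks at boot; the locks are only taken later through the RPC. Condensed from the launch commands in the trace above and below (relative paths stand in for the full workspace paths):

    # First target: cores 0-2 (-m 0x7), default RPC socket, core locks deferred.
    ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
    # Second target: cores 2-4 (-m 0x1c), its own RPC socket, locks deferred too,
    # so both can boot even though the masks overlap on core 2.
    ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &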
00:06:23.390 [2024-07-15 15:57:59.158373] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:23.650 [2024-07-15 15:57:59.268526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:23.650 [2024-07-15 15:57:59.268681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.650 [2024-07-15 15:57:59.268683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:24.222 15:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:24.222 15:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:24.222 15:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:24.222 15:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.222 15:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.222 15:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.222 15:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:24.222 15:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:24.222 15:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:24.222 15:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:24.222 15:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:24.222 15:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:24.222 15:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:24.222 15:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:24.222 15:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.222 15:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.222 [2024-07-15 15:57:59.877190] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2082805 has claimed it. 
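The "Cannot create lock on core 2" error above follows directly from the two core masks: 0x7 selects cores 0-2 and 0x1c selects cores 2-4, so core 2 is requested by both targets. A quick way to expand a mask into its core list (plain bash, nothing SPDK-specific):

    # Expand a CPU core mask into the list of cores it selects.
    mask_to_cores() {
        local -i mask=$(( $1 ))
        local -i core=0
        local cores=()
        while (( mask > 0 )); do
            if (( mask & 1 )); then
                cores+=("$core")
            fi
            mask=$(( mask >> 1 ))
            core=$(( core + 1 ))
        done
        echo "${cores[*]}"
    }
    mask_to_cores 0x7    # -> 0 1 2
    mask_to_cores 0x1c   # -> 2 3 4   (overlaps the first mask on core 2)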
00:06:24.222 request: 00:06:24.222 { 00:06:24.222 "method": "framework_enable_cpumask_locks", 00:06:24.222 "req_id": 1 00:06:24.222 } 00:06:24.222 Got JSON-RPC error response 00:06:24.222 response: 00:06:24.222 { 00:06:24.222 "code": -32603, 00:06:24.222 "message": "Failed to claim CPU core: 2" 00:06:24.222 } 00:06:24.222 15:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:24.222 15:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:24.222 15:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:24.222 15:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:24.222 15:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:24.222 15:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2082805 /var/tmp/spdk.sock 00:06:24.222 15:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2082805 ']' 00:06:24.222 15:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.222 15:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.222 15:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.222 15:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.222 15:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.222 15:58:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:24.222 15:58:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:24.222 15:58:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2082826 /var/tmp/spdk2.sock 00:06:24.222 15:58:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2082826 ']' 00:06:24.222 15:58:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.222 15:58:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.483 15:58:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
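The request/response pair above is ordinary JSON-RPC over the target's Unix-domain socket. Outside the harness the same call can be made with SPDK's bundled rpc.py client (assuming it exposes the method under the same name, as the rpc_cmd wrapper in the trace does), or by writing the raw request to the socket; socat here is an illustration, not part of the run:

    # Bundled client, pointed at the second target's socket:
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # Raw JSON-RPC, same method name as in the request above:
    printf '{"jsonrpc":"2.0","id":1,"method":"framework_enable_cpumask_locks"}\n' \
        | socat - UNIX-CONNECT:/var/tmp/spdk2.sock
    # While the first target still holds core 2, the reply is the error shown above:
    # {"code": -32603, "message": "Failed to claim CPU core: 2"}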
00:06:24.483 15:58:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.483 15:58:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.483 15:58:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:24.483 15:58:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:24.483 15:58:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:24.483 15:58:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:24.483 15:58:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:24.483 15:58:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:24.483 00:06:24.483 real 0m2.011s 00:06:24.483 user 0m0.766s 00:06:24.483 sys 0m0.170s 00:06:24.483 15:58:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.483 15:58:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.483 ************************************ 00:06:24.483 END TEST locking_overlapped_coremask_via_rpc 00:06:24.483 ************************************ 00:06:24.483 15:58:00 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:24.483 15:58:00 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:24.483 15:58:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2082805 ]] 00:06:24.483 15:58:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2082805 00:06:24.483 15:58:00 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2082805 ']' 00:06:24.483 15:58:00 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2082805 00:06:24.483 15:58:00 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:24.483 15:58:00 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:24.483 15:58:00 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2082805 00:06:24.483 15:58:00 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:24.483 15:58:00 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:24.483 15:58:00 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2082805' 00:06:24.483 killing process with pid 2082805 00:06:24.483 15:58:00 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2082805 00:06:24.483 15:58:00 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2082805 00:06:24.744 15:58:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2082826 ]] 00:06:24.744 15:58:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2082826 00:06:24.744 15:58:00 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2082826 ']' 00:06:24.744 15:58:00 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2082826 00:06:24.744 15:58:00 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:06:24.744 15:58:00 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:24.744 15:58:00 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2082826 00:06:24.744 15:58:00 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:24.744 15:58:00 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:24.744 15:58:00 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2082826' 00:06:24.744 killing process with pid 2082826 00:06:24.744 15:58:00 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2082826 00:06:24.744 15:58:00 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2082826 00:06:25.004 15:58:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:25.004 15:58:00 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:25.004 15:58:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2082805 ]] 00:06:25.004 15:58:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2082805 00:06:25.004 15:58:00 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2082805 ']' 00:06:25.004 15:58:00 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2082805 00:06:25.004 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2082805) - No such process 00:06:25.004 15:58:00 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2082805 is not found' 00:06:25.004 Process with pid 2082805 is not found 00:06:25.004 15:58:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2082826 ]] 00:06:25.004 15:58:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2082826 00:06:25.004 15:58:00 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2082826 ']' 00:06:25.004 15:58:00 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2082826 00:06:25.004 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2082826) - No such process 00:06:25.004 15:58:00 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2082826 is not found' 00:06:25.004 Process with pid 2082826 is not found 00:06:25.004 15:58:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:25.004 00:06:25.004 real 0m15.724s 00:06:25.004 user 0m27.027s 00:06:25.004 sys 0m4.687s 00:06:25.004 15:58:00 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.004 15:58:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.004 ************************************ 00:06:25.004 END TEST cpu_locks 00:06:25.004 ************************************ 00:06:25.004 15:58:00 event -- common/autotest_common.sh@1142 -- # return 0 00:06:25.004 00:06:25.004 real 0m40.388s 00:06:25.004 user 1m17.408s 00:06:25.004 sys 0m7.737s 00:06:25.004 15:58:00 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.004 15:58:00 event -- common/autotest_common.sh@10 -- # set +x 00:06:25.004 ************************************ 00:06:25.004 END TEST event 00:06:25.004 ************************************ 00:06:25.264 15:58:00 -- common/autotest_common.sh@1142 -- # return 0 00:06:25.264 15:58:00 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:25.264 15:58:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:25.264 15:58:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.264 
15:58:00 -- common/autotest_common.sh@10 -- # set +x 00:06:25.264 ************************************ 00:06:25.264 START TEST thread 00:06:25.264 ************************************ 00:06:25.264 15:58:00 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:25.264 * Looking for test storage... 00:06:25.264 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:25.264 15:58:00 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:25.264 15:58:00 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:25.264 15:58:00 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.264 15:58:00 thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.264 ************************************ 00:06:25.264 START TEST thread_poller_perf 00:06:25.264 ************************************ 00:06:25.264 15:58:01 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:25.264 [2024-07-15 15:58:01.042454] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:06:25.264 [2024-07-15 15:58:01.042550] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2083305 ] 00:06:25.264 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.524 [2024-07-15 15:58:01.109340] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.524 [2024-07-15 15:58:01.177757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.524 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:26.463 ====================================== 00:06:26.463 busy:2407209552 (cyc) 00:06:26.463 total_run_count: 287000 00:06:26.463 tsc_hz: 2400000000 (cyc) 00:06:26.463 ====================================== 00:06:26.463 poller_cost: 8387 (cyc), 3494 (nsec) 00:06:26.463 00:06:26.463 real 0m1.217s 00:06:26.463 user 0m1.142s 00:06:26.463 sys 0m0.071s 00:06:26.463 15:58:02 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.463 15:58:02 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:26.463 ************************************ 00:06:26.463 END TEST thread_poller_perf 00:06:26.463 ************************************ 00:06:26.463 15:58:02 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:26.463 15:58:02 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:26.463 15:58:02 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:26.463 15:58:02 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.463 15:58:02 thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.724 ************************************ 00:06:26.724 START TEST thread_poller_perf 00:06:26.724 ************************************ 00:06:26.724 15:58:02 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:26.724 [2024-07-15 15:58:02.333843] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:06:26.724 [2024-07-15 15:58:02.333936] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2083611 ] 00:06:26.724 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.724 [2024-07-15 15:58:02.396744] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.724 [2024-07-15 15:58:02.459066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.724 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:27.678 ====================================== 00:06:27.678 busy:2402090746 (cyc) 00:06:27.678 total_run_count: 3808000 00:06:27.678 tsc_hz: 2400000000 (cyc) 00:06:27.678 ====================================== 00:06:27.678 poller_cost: 630 (cyc), 262 (nsec) 00:06:27.678 00:06:27.678 real 0m1.202s 00:06:27.678 user 0m1.126s 00:06:27.678 sys 0m0.072s 00:06:27.678 15:58:03 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.678 15:58:03 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:27.678 ************************************ 00:06:27.678 END TEST thread_poller_perf 00:06:27.678 ************************************ 00:06:27.939 15:58:03 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:27.939 15:58:03 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:27.939 00:06:27.939 real 0m2.667s 00:06:27.939 user 0m2.362s 00:06:27.939 sys 0m0.313s 00:06:27.939 15:58:03 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.939 15:58:03 thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.939 ************************************ 00:06:27.939 END TEST thread 00:06:27.939 ************************************ 00:06:27.939 15:58:03 -- common/autotest_common.sh@1142 -- # return 0 00:06:27.939 15:58:03 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:27.939 15:58:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:27.939 15:58:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.939 15:58:03 -- common/autotest_common.sh@10 -- # set +x 00:06:27.939 ************************************ 00:06:27.939 START TEST accel 00:06:27.939 ************************************ 00:06:27.939 15:58:03 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:27.939 * Looking for test storage... 00:06:27.939 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:27.939 15:58:03 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:27.939 15:58:03 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:27.939 15:58:03 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:27.939 15:58:03 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2084001 00:06:27.939 15:58:03 accel -- accel/accel.sh@63 -- # waitforlisten 2084001 00:06:27.939 15:58:03 accel -- common/autotest_common.sh@829 -- # '[' -z 2084001 ']' 00:06:27.939 15:58:03 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.939 15:58:03 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.939 15:58:03 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:27.939 15:58:03 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
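In both poller_perf result blocks above, poller_cost is simply the busy cycle count divided by total_run_count, converted to nanoseconds with the reported 2.4 GHz TSC. Reproducing the first run's numbers in shell arithmetic:

    busy=2407209552 runs=287000 tsc_hz=2400000000
    cyc=$(( busy / runs ))                   # 8387 cycles per poller invocation
    nsec=$(( cyc * 1000000000 / tsc_hz ))    # 3494 ns at 2.4 GHz
    echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"
    # The 0-microsecond run works out the same way: 2402090746 / 3808000 -> 630 cyc, 262 ns.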
00:06:27.939 15:58:03 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:27.939 15:58:03 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.939 15:58:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:27.939 15:58:03 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.939 15:58:03 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.939 15:58:03 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.939 15:58:03 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.939 15:58:03 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.939 15:58:03 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:27.939 15:58:03 accel -- accel/accel.sh@41 -- # jq -r . 00:06:27.939 [2024-07-15 15:58:03.779349] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:06:27.939 [2024-07-15 15:58:03.779413] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2084001 ] 00:06:28.199 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.199 [2024-07-15 15:58:03.845253] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.199 [2024-07-15 15:58:03.918675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.770 15:58:04 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:28.770 15:58:04 accel -- common/autotest_common.sh@862 -- # return 0 00:06:28.770 15:58:04 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:28.770 15:58:04 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:28.770 15:58:04 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:28.770 15:58:04 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:28.770 15:58:04 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:28.770 15:58:04 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:28.770 15:58:04 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:28.770 15:58:04 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.770 15:58:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:28.770 15:58:04 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.770 15:58:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.770 15:58:04 accel -- accel/accel.sh@72 -- # IFS== 00:06:28.770 15:58:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:28.770 15:58:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:28.770 15:58:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.770 15:58:04 accel -- accel/accel.sh@72 -- # IFS== 00:06:28.770 15:58:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:28.770 15:58:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:28.770 15:58:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.770 15:58:04 accel -- accel/accel.sh@72 -- # IFS== 00:06:28.770 15:58:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:28.770 15:58:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:28.770 15:58:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.770 15:58:04 accel -- accel/accel.sh@72 -- # IFS== 00:06:28.770 15:58:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:28.770 15:58:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:28.770 15:58:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.770 15:58:04 accel -- accel/accel.sh@72 -- # IFS== 00:06:28.770 15:58:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:28.770 15:58:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:28.770 15:58:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.770 15:58:04 accel -- accel/accel.sh@72 -- # IFS== 00:06:28.770 15:58:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:28.770 15:58:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:28.770 15:58:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.770 15:58:04 accel -- accel/accel.sh@72 -- # IFS== 00:06:28.770 15:58:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:28.770 15:58:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:28.770 15:58:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.770 15:58:04 accel -- accel/accel.sh@72 -- # IFS== 00:06:28.770 15:58:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:28.770 15:58:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:28.770 15:58:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.770 15:58:04 accel -- accel/accel.sh@72 -- # IFS== 00:06:28.770 15:58:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:28.770 15:58:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:28.770 15:58:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.770 15:58:04 accel -- accel/accel.sh@72 -- # IFS== 00:06:28.770 15:58:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:28.770 15:58:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:28.770 15:58:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.770 15:58:04 accel -- accel/accel.sh@72 -- # IFS== 00:06:28.770 15:58:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:28.770 
15:58:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:28.770 15:58:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.770 15:58:04 accel -- accel/accel.sh@72 -- # IFS== 00:06:28.770 15:58:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:28.770 15:58:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:28.770 15:58:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.770 15:58:04 accel -- accel/accel.sh@72 -- # IFS== 00:06:28.770 15:58:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:28.770 15:58:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:28.770 15:58:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.770 15:58:04 accel -- accel/accel.sh@72 -- # IFS== 00:06:28.770 15:58:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:28.770 15:58:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:28.770 15:58:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.770 15:58:04 accel -- accel/accel.sh@72 -- # IFS== 00:06:28.770 15:58:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:28.770 15:58:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:28.770 15:58:04 accel -- accel/accel.sh@75 -- # killprocess 2084001 00:06:28.770 15:58:04 accel -- common/autotest_common.sh@948 -- # '[' -z 2084001 ']' 00:06:28.770 15:58:04 accel -- common/autotest_common.sh@952 -- # kill -0 2084001 00:06:28.770 15:58:04 accel -- common/autotest_common.sh@953 -- # uname 00:06:29.031 15:58:04 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:29.031 15:58:04 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2084001 00:06:29.031 15:58:04 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:29.031 15:58:04 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:29.031 15:58:04 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2084001' 00:06:29.031 killing process with pid 2084001 00:06:29.031 15:58:04 accel -- common/autotest_common.sh@967 -- # kill 2084001 00:06:29.031 15:58:04 accel -- common/autotest_common.sh@972 -- # wait 2084001 00:06:29.031 15:58:04 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:29.031 15:58:04 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:29.031 15:58:04 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:29.031 15:58:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.292 15:58:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.292 15:58:04 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:29.292 15:58:04 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:29.292 15:58:04 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:29.292 15:58:04 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.292 15:58:04 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.292 15:58:04 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.292 15:58:04 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.292 15:58:04 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.292 15:58:04 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:29.292 15:58:04 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
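The long run of IFS== / read -r opc module lines above is the harness turning the accel_get_opc_assignments RPC output into an opcode-to-module map; every opcode in this run resolves to the software module. A condensed sketch of the same parsing, using the rpc.py client in place of the harness's rpc_cmd wrapper (the jq filter is the one from the trace):

    declare -A expected_opcs
    while IFS== read -r opc module; do
        expected_opcs["$opc"]=$module
    done < <(./scripts/rpc.py accel_get_opc_assignments \
                 | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]')
    # e.g. expected_opcs[copy]=software, expected_opcs[crc32c]=software, ...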
00:06:29.292 15:58:04 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.292 15:58:04 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:29.292 15:58:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:29.292 15:58:04 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:29.292 15:58:04 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:29.292 15:58:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.292 15:58:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.292 ************************************ 00:06:29.292 START TEST accel_missing_filename 00:06:29.292 ************************************ 00:06:29.292 15:58:05 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:29.292 15:58:05 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:29.292 15:58:05 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:29.292 15:58:05 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:29.292 15:58:05 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:29.292 15:58:05 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:29.292 15:58:05 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:29.292 15:58:05 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:29.292 15:58:05 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:29.292 15:58:05 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:29.292 15:58:05 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.292 15:58:05 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.292 15:58:05 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.292 15:58:05 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.292 15:58:05 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.292 15:58:05 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:29.292 15:58:05 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:29.292 [2024-07-15 15:58:05.041046] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:06:29.292 [2024-07-15 15:58:05.041111] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2084375 ] 00:06:29.292 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.292 [2024-07-15 15:58:05.103597] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.553 [2024-07-15 15:58:05.170771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.553 [2024-07-15 15:58:05.202506] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:29.553 [2024-07-15 15:58:05.239175] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:29.553 A filename is required. 
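accel_missing_filename deliberately runs accel_perf with -w compress and no -l input file, so "A filename is required." is the expected outcome; the es= assignments just after this point are the harness normalising the non-zero exit status before asserting that the command really failed. A rough standalone rendering of that negative-test pattern (simplified for illustration, not copied from autotest_common.sh):

    # Run a command that is expected to fail; succeed only if it actually failed.
    expect_failure() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && es=$(( es - 128 ))   # fold a signal-style status back down
        (( es != 0 ))
    }
    expect_failure ./build/examples/accel_perf -t 1 -w compress && echo "failed as expected"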
00:06:29.553 15:58:05 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:29.553 15:58:05 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:29.553 15:58:05 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:29.553 15:58:05 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:29.553 15:58:05 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:29.553 15:58:05 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:29.553 00:06:29.553 real 0m0.282s 00:06:29.553 user 0m0.224s 00:06:29.553 sys 0m0.097s 00:06:29.553 15:58:05 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.553 15:58:05 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:29.553 ************************************ 00:06:29.553 END TEST accel_missing_filename 00:06:29.553 ************************************ 00:06:29.553 15:58:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:29.553 15:58:05 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:29.553 15:58:05 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:29.553 15:58:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.553 15:58:05 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.553 ************************************ 00:06:29.553 START TEST accel_compress_verify 00:06:29.553 ************************************ 00:06:29.553 15:58:05 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:29.553 15:58:05 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:29.553 15:58:05 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:29.553 15:58:05 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:29.553 15:58:05 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:29.553 15:58:05 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:29.553 15:58:05 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:29.553 15:58:05 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:29.553 15:58:05 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:29.553 15:58:05 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:29.553 15:58:05 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.553 15:58:05 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.553 15:58:05 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.553 15:58:05 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.553 15:58:05 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.553 15:58:05 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:29.553 15:58:05 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:29.813 [2024-07-15 15:58:05.399070] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:06:29.813 [2024-07-15 15:58:05.399168] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2084396 ] 00:06:29.813 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.813 [2024-07-15 15:58:05.461393] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.813 [2024-07-15 15:58:05.527601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.813 [2024-07-15 15:58:05.559357] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:29.813 [2024-07-15 15:58:05.596077] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:29.813 00:06:29.813 Compression does not support the verify option, aborting. 00:06:29.813 15:58:05 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:29.813 15:58:05 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:29.813 15:58:05 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:29.813 15:58:05 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:29.813 15:58:05 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:29.813 15:58:05 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:29.813 00:06:29.813 real 0m0.281s 00:06:29.813 user 0m0.211s 00:06:29.813 sys 0m0.110s 00:06:29.813 15:58:05 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.813 15:58:05 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:29.813 ************************************ 00:06:29.813 END TEST accel_compress_verify 00:06:29.813 ************************************ 00:06:30.075 15:58:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:30.075 15:58:05 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:30.075 15:58:05 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:30.075 15:58:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.075 15:58:05 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.075 ************************************ 00:06:30.075 START TEST accel_wrong_workload 00:06:30.075 ************************************ 00:06:30.075 15:58:05 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:30.075 15:58:05 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:30.075 15:58:05 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:30.075 15:58:05 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:30.075 15:58:05 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.075 15:58:05 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:30.075 15:58:05 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.075 15:58:05 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:30.075 15:58:05 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:30.075 15:58:05 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:30.075 15:58:05 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.075 15:58:05 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.075 15:58:05 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.075 15:58:05 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.075 15:58:05 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.075 15:58:05 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:30.075 15:58:05 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:30.075 Unsupported workload type: foobar 00:06:30.075 [2024-07-15 15:58:05.755408] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:30.075 accel_perf options: 00:06:30.075 [-h help message] 00:06:30.075 [-q queue depth per core] 00:06:30.075 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:30.075 [-T number of threads per core 00:06:30.075 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:30.075 [-t time in seconds] 00:06:30.076 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:30.076 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:30.076 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:30.076 [-l for compress/decompress workloads, name of uncompressed input file 00:06:30.076 [-S for crc32c workload, use this seed value (default 0) 00:06:30.076 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:30.076 [-f for fill workload, use this BYTE value (default 255) 00:06:30.076 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:30.076 [-y verify result if this switch is on] 00:06:30.076 [-a tasks to allocate per core (default: same value as -q)] 00:06:30.076 Can be used to spread operations across a wider range of memory. 
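The options dump above is accel_perf rejecting the deliberately bogus -w foobar workload. For contrast, a valid invocation, essentially the one the crc32c test issues further down minus the harness's -c config pipe (the relative path to the example binary is shorthand for the workspace path in the trace):

    # 1-second crc32c run, seed 32, default 4 KiB transfer size, with result verification:
    ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y
    # An unsupported workload name (-w foobar) or a negative source-buffer count (-x -1)
    # fails option parsing and prints the usage text shown above instead of running.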
00:06:30.076 15:58:05 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:30.076 15:58:05 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:30.076 15:58:05 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:30.076 15:58:05 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:30.076 00:06:30.076 real 0m0.037s 00:06:30.076 user 0m0.023s 00:06:30.076 sys 0m0.014s 00:06:30.076 15:58:05 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.076 15:58:05 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:30.076 ************************************ 00:06:30.076 END TEST accel_wrong_workload 00:06:30.076 ************************************ 00:06:30.076 Error: writing output failed: Broken pipe 00:06:30.076 15:58:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:30.076 15:58:05 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:30.076 15:58:05 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:30.076 15:58:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.076 15:58:05 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.076 ************************************ 00:06:30.076 START TEST accel_negative_buffers 00:06:30.076 ************************************ 00:06:30.076 15:58:05 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:30.076 15:58:05 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:30.076 15:58:05 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:30.076 15:58:05 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:30.076 15:58:05 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.076 15:58:05 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:30.076 15:58:05 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.076 15:58:05 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:30.076 15:58:05 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:30.076 15:58:05 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:30.076 15:58:05 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.076 15:58:05 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.076 15:58:05 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.076 15:58:05 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.076 15:58:05 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.076 15:58:05 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:30.076 15:58:05 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:30.076 -x option must be non-negative. 
00:06:30.076 [2024-07-15 15:58:05.869172] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:30.076 accel_perf options: 00:06:30.076 [-h help message] 00:06:30.076 [-q queue depth per core] 00:06:30.076 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:30.076 [-T number of threads per core 00:06:30.076 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:30.076 [-t time in seconds] 00:06:30.076 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:30.076 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:30.076 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:30.076 [-l for compress/decompress workloads, name of uncompressed input file 00:06:30.076 [-S for crc32c workload, use this seed value (default 0) 00:06:30.076 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:30.076 [-f for fill workload, use this BYTE value (default 255) 00:06:30.076 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:30.076 [-y verify result if this switch is on] 00:06:30.076 [-a tasks to allocate per core (default: same value as -q)] 00:06:30.076 Can be used to spread operations across a wider range of memory. 00:06:30.076 15:58:05 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:30.076 15:58:05 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:30.076 15:58:05 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:30.076 15:58:05 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:30.076 00:06:30.076 real 0m0.037s 00:06:30.076 user 0m0.023s 00:06:30.076 sys 0m0.014s 00:06:30.076 15:58:05 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.076 15:58:05 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:30.076 ************************************ 00:06:30.076 END TEST accel_negative_buffers 00:06:30.076 ************************************ 00:06:30.076 Error: writing output failed: Broken pipe 00:06:30.076 15:58:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:30.076 15:58:05 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:30.076 15:58:05 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:30.076 15:58:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.076 15:58:05 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.337 ************************************ 00:06:30.337 START TEST accel_crc32c 00:06:30.337 ************************************ 00:06:30.337 15:58:05 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:30.337 15:58:05 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:30.337 15:58:05 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:30.337 15:58:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.337 15:58:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.337 15:58:05 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:30.337 15:58:05 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:30.337 15:58:05 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:30.337 15:58:05 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.337 15:58:05 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.337 15:58:05 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.337 15:58:05 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.337 15:58:05 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.337 15:58:05 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:30.337 15:58:05 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:30.337 [2024-07-15 15:58:05.981482] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:06:30.337 [2024-07-15 15:58:05.981575] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2084510 ] 00:06:30.337 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.337 [2024-07-15 15:58:06.048740] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.337 [2024-07-15 15:58:06.123980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.337 15:58:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:30.337 15:58:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.337 15:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.337 15:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.337 15:58:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:30.337 15:58:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.337 15:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.337 15:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.337 15:58:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.338 15:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.723 15:58:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.723 15:58:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:06:31.723 15:58:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.723 15:58:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.723 15:58:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.723 15:58:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.723 15:58:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.723 15:58:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.723 15:58:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.723 15:58:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.723 15:58:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.723 15:58:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.723 15:58:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.723 15:58:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.723 15:58:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.723 15:58:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.723 15:58:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.723 15:58:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.723 15:58:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.723 15:58:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.723 15:58:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.723 15:58:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.723 15:58:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.723 15:58:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.723 15:58:07 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:31.723 15:58:07 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:31.723 15:58:07 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.723 00:06:31.723 real 0m1.301s 00:06:31.723 user 0m1.203s 00:06:31.723 sys 0m0.109s 00:06:31.723 15:58:07 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.723 15:58:07 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:31.723 ************************************ 00:06:31.723 END TEST accel_crc32c 00:06:31.723 ************************************ 00:06:31.723 15:58:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:31.723 15:58:07 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:31.723 15:58:07 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:31.723 15:58:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.723 15:58:07 accel -- common/autotest_common.sh@10 -- # set +x 00:06:31.723 ************************************ 00:06:31.723 START TEST accel_crc32c_C2 00:06:31.723 ************************************ 00:06:31.723 15:58:07 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:31.723 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:31.723 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:31.723 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.723 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.723 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:31.723 15:58:07 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:31.723 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.723 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.723 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.723 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.723 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.723 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.723 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:31.723 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:31.723 [2024-07-15 15:58:07.357535] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:06:31.723 [2024-07-15 15:58:07.357598] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2084819 ] 00:06:31.723 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.723 [2024-07-15 15:58:07.417168] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.723 [2024-07-15 15:58:07.480622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.723 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:31.723 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.723 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.723 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.723 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:31.723 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.724 15:58:07 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:06:31.724 15:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.110 15:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.110 15:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.110 15:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.110 15:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.110 15:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.110 15:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.110 15:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.110 15:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.110 15:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.110 15:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.110 15:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.110 15:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.110 15:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.110 15:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.110 15:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.110 15:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.110 15:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.110 15:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.110 15:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.110 15:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.110 15:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.110 15:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.110 15:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.110 15:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.110 15:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:33.110 15:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:33.110 15:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.110 00:06:33.110 real 0m1.280s 00:06:33.110 user 0m1.191s 00:06:33.110 sys 0m0.100s 00:06:33.110 15:58:08 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.110 15:58:08 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:33.110 ************************************ 00:06:33.110 END TEST accel_crc32c_C2 00:06:33.111 ************************************ 00:06:33.111 15:58:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:33.111 15:58:08 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:33.111 15:58:08 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:33.111 15:58:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.111 15:58:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.111 ************************************ 00:06:33.111 START TEST accel_copy 00:06:33.111 ************************************ 00:06:33.111 15:58:08 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
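For readers skimming this trace: the pass/fail decision for each accel case is the trio of checks logged at accel/accel.sh@27 (visible above for accel_crc32c and accel_crc32c_C2). Reduced to a standalone shell sketch, with variable names taken from the accel_module=/accel_opc= assignments in the trace (the script itself may phrase the comparison differently):

    accel_module=software            # as traced at accel/accel.sh@22
    accel_opc=crc32c                 # as traced at accel/accel.sh@23
    [[ -n $accel_module ]]           # an engine was selected
    [[ -n $accel_opc ]]              # the workload opcode was recorded
    [[ $accel_module == software ]]  # and it is the expected software engine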
00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:33.111 [2024-07-15 15:58:08.714616] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:06:33.111 [2024-07-15 15:58:08.714682] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2085166 ] 00:06:33.111 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.111 [2024-07-15 15:58:08.774482] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.111 [2024-07-15 15:58:08.839472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.111 15:58:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.496 15:58:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:34.496 15:58:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.496 15:58:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.496 15:58:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.496 
15:58:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:34.496 15:58:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.496 15:58:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.496 15:58:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.496 15:58:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:34.496 15:58:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.496 15:58:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.496 15:58:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.496 15:58:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:34.496 15:58:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.496 15:58:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.496 15:58:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.496 15:58:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:34.496 15:58:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.496 15:58:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.496 15:58:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.496 15:58:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:34.496 15:58:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.496 15:58:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.496 15:58:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.496 15:58:09 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:34.496 15:58:09 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:34.496 15:58:09 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.496 00:06:34.496 real 0m1.281s 00:06:34.496 user 0m1.192s 00:06:34.496 sys 0m0.100s 00:06:34.496 15:58:09 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.496 15:58:09 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:34.496 ************************************ 00:06:34.496 END TEST accel_copy 00:06:34.496 ************************************ 00:06:34.496 15:58:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:34.496 15:58:10 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:34.496 15:58:10 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:34.496 15:58:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.496 15:58:10 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.496 ************************************ 00:06:34.496 START TEST accel_fill 00:06:34.496 ************************************ 00:06:34.496 15:58:10 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:34.496 [2024-07-15 15:58:10.073245] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:06:34.496 [2024-07-15 15:58:10.073342] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2085521 ] 00:06:34.496 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.496 [2024-07-15 15:58:10.134908] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.496 [2024-07-15 15:58:10.199174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
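For reference, every case in this log exercises the same example binary. A minimal sketch of reproducing these runs by hand, assuming the SPDK build tree at the workspace path shown in the trace and omitting the JSON accel config that accel.sh feeds in via -c /dev/fd/62:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Flags copied verbatim from the invocations captured above; -t 1 matches
    # the '1 seconds' run time recorded in the trace.
    "$SPDK_DIR"/build/examples/accel_perf -t 1 -w crc32c -S 32 -y
    "$SPDK_DIR"/build/examples/accel_perf -t 1 -w copy -y
    "$SPDK_DIR"/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y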
00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:34.496 15:58:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:34.497 15:58:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.497 15:58:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.497 15:58:10 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:34.497 15:58:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:34.497 15:58:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.497 15:58:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.497 15:58:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:34.497 15:58:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:34.497 15:58:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.497 15:58:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.497 15:58:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:34.497 15:58:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:34.497 15:58:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.497 15:58:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.882 15:58:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:35.882 15:58:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.882 15:58:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.882 15:58:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.882 15:58:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:35.882 15:58:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.882 15:58:11 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:06:35.882 15:58:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.882 15:58:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:35.882 15:58:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.882 15:58:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.882 15:58:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.882 15:58:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:35.882 15:58:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.882 15:58:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.882 15:58:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.882 15:58:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:35.882 15:58:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.882 15:58:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.882 15:58:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.882 15:58:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:35.882 15:58:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.882 15:58:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.882 15:58:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.882 15:58:11 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:35.882 15:58:11 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:35.882 15:58:11 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.882 00:06:35.882 real 0m1.286s 00:06:35.882 user 0m1.186s 00:06:35.882 sys 0m0.112s 00:06:35.882 15:58:11 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.882 15:58:11 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:35.882 ************************************ 00:06:35.882 END TEST accel_fill 00:06:35.882 ************************************ 00:06:35.882 15:58:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:35.882 15:58:11 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:35.882 15:58:11 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:35.882 15:58:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.882 15:58:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:35.882 ************************************ 00:06:35.882 START TEST accel_copy_crc32c 00:06:35.882 ************************************ 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:35.882 [2024-07-15 15:58:11.433595] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:06:35.882 [2024-07-15 15:58:11.433659] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2085815 ] 00:06:35.882 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.882 [2024-07-15 15:58:11.493967] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.882 [2024-07-15 15:58:11.559052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.882 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.883 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.883 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.883 
15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.883 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.883 15:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.267 15:58:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.267 15:58:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.267 15:58:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.267 15:58:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.267 15:58:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.267 15:58:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.267 15:58:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.267 15:58:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.267 15:58:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.267 15:58:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.267 15:58:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.267 15:58:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.267 15:58:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.267 15:58:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.267 15:58:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.267 15:58:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.267 15:58:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.267 15:58:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.267 15:58:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.267 15:58:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.267 15:58:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.267 15:58:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.267 15:58:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.267 15:58:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.267 15:58:12 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:37.267 15:58:12 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:37.267 15:58:12 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.267 00:06:37.267 real 0m1.282s 00:06:37.267 user 0m1.189s 00:06:37.267 sys 0m0.105s 00:06:37.267 15:58:12 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.267 15:58:12 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:37.267 ************************************ 00:06:37.267 END TEST accel_copy_crc32c 00:06:37.267 ************************************ 00:06:37.267 15:58:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:37.267 15:58:12 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:37.267 15:58:12 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:37.267 15:58:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.267 15:58:12 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.268 ************************************ 00:06:37.268 START TEST accel_copy_crc32c_C2 00:06:37.268 ************************************ 00:06:37.268 15:58:12 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:37.268 [2024-07-15 15:58:12.794516] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:06:37.268 [2024-07-15 15:58:12.794611] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2086004 ] 00:06:37.268 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.268 [2024-07-15 15:58:12.857780] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.268 [2024-07-15 15:58:12.928819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.268 15:58:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.652 15:58:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.652 15:58:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.652 15:58:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.652 15:58:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.652 15:58:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.652 15:58:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.652 15:58:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.652 15:58:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.652 15:58:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.652 15:58:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.652 15:58:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.652 15:58:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.652 15:58:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.652 15:58:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.652 15:58:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.652 15:58:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.652 15:58:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.652 15:58:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.652 15:58:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.652 15:58:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.652 15:58:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.652 15:58:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.652 15:58:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.652 15:58:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
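One note on the timing lines: the real/user/sys triplet printed after each END TEST banner is the default output format of bash's time keyword, which is consistent with the run_test wrapper (invoked above as, e.g., run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2) simply timing each case. A trivial, harness-independent illustration of that format:

    time sleep 1   # prints real/user/sys lines in the same 0mX.XXXs style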
00:06:38.652 15:58:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:38.652 15:58:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:38.652 15:58:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.652 00:06:38.652 real 0m1.293s 00:06:38.652 user 0m1.206s 00:06:38.652 sys 0m0.100s 00:06:38.652 15:58:14 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.652 15:58:14 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:38.652 ************************************ 00:06:38.652 END TEST accel_copy_crc32c_C2 00:06:38.653 ************************************ 00:06:38.653 15:58:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:38.653 15:58:14 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:38.653 15:58:14 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:38.653 15:58:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.653 15:58:14 accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.653 ************************************ 00:06:38.653 START TEST accel_dualcast 00:06:38.653 ************************************ 00:06:38.653 15:58:14 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:38.653 [2024-07-15 15:58:14.162944] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
00:06:38.653 [2024-07-15 15:58:14.163045] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2086260 ] 00:06:38.653 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.653 [2024-07-15 15:58:14.226586] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.653 [2024-07-15 15:58:14.292812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:38.653 15:58:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.596 15:58:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:39.596 15:58:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.596 15:58:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.596 15:58:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.596 15:58:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:39.596 15:58:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.596 15:58:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.596 15:58:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.596 15:58:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:39.596 15:58:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.597 15:58:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.597 15:58:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.597 15:58:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:39.597 15:58:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.597 15:58:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.597 15:58:15 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.597 15:58:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:39.597 15:58:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.597 15:58:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.597 15:58:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.597 15:58:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:39.597 15:58:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.597 15:58:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.597 15:58:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.597 15:58:15 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:39.597 15:58:15 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:39.597 15:58:15 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.597 00:06:39.597 real 0m1.287s 00:06:39.597 user 0m1.198s 00:06:39.597 sys 0m0.100s 00:06:39.597 15:58:15 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.597 15:58:15 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:39.597 ************************************ 00:06:39.597 END TEST accel_dualcast 00:06:39.597 ************************************ 00:06:39.858 15:58:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:39.858 15:58:15 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:39.858 15:58:15 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:39.858 15:58:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.858 15:58:15 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.858 ************************************ 00:06:39.858 START TEST accel_compare 00:06:39.858 ************************************ 00:06:39.858 15:58:15 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:39.858 [2024-07-15 15:58:15.526410] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
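The three [[ ... ]] tests that close each stage (accel/accel.sh@27 in the trace) are logged with their variables already expanded. Putting back the names assigned at accel.sh@22 and accel.sh@23 gives the sketch below; the wrapper function is purely illustrative and not the script's actual code:

  check_accel_result() {                      # hypothetical helper, for readability only
      local accel_module=$1 accel_opc=$2
      [[ -n "$accel_module" ]] &&             # a module name was parsed from accel_perf output
          [[ -n "$accel_opc" ]] &&            # and so was the workload (compare, in this stage)
          [[ "$accel_module" == "software" ]] # the suite expects the software module here
  }
  check_accel_result software compare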
00:06:39.858 [2024-07-15 15:58:15.526475] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2086609 ] 00:06:39.858 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.858 [2024-07-15 15:58:15.587848] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.858 [2024-07-15 15:58:15.655188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:39.858 15:58:15 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:39.858 15:58:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:40.119 15:58:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:40.119 15:58:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:40.119 15:58:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:40.119 15:58:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:40.119 15:58:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:40.119 15:58:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:40.119 15:58:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:40.119 15:58:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.062 15:58:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:41.062 15:58:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.062 15:58:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.062 15:58:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.062 15:58:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:41.062 15:58:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.062 15:58:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.062 15:58:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.062 15:58:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:41.062 15:58:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.062 15:58:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.062 15:58:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.062 15:58:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:41.062 15:58:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.062 15:58:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.062 15:58:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.062 
15:58:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:41.062 15:58:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.062 15:58:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.062 15:58:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.062 15:58:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:41.062 15:58:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.062 15:58:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.062 15:58:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.062 15:58:16 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:41.062 15:58:16 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:41.062 15:58:16 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.062 00:06:41.062 real 0m1.285s 00:06:41.062 user 0m1.193s 00:06:41.062 sys 0m0.104s 00:06:41.062 15:58:16 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.063 15:58:16 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:41.063 ************************************ 00:06:41.063 END TEST accel_compare 00:06:41.063 ************************************ 00:06:41.063 15:58:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:41.063 15:58:16 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:41.063 15:58:16 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:41.063 15:58:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.063 15:58:16 accel -- common/autotest_common.sh@10 -- # set +x 00:06:41.063 ************************************ 00:06:41.063 START TEST accel_xor 00:06:41.063 ************************************ 00:06:41.063 15:58:16 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:41.063 15:58:16 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:41.063 15:58:16 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:41.063 15:58:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.063 15:58:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.063 15:58:16 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:41.063 15:58:16 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:41.063 15:58:16 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:41.063 15:58:16 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:41.063 15:58:16 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:41.063 15:58:16 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.063 15:58:16 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.063 15:58:16 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:41.063 15:58:16 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:41.063 15:58:16 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:41.063 [2024-07-15 15:58:16.887831] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
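Each accel_perf start echoes its DPDK EAL parameters verbatim: a single-core mask (-c 0x1), per-process hugepage files selected by --file-prefix=spdk_pid..., and telemetry disabled. The recurring "EAL: No free 2048 kB hugepages reported on node 1" notice only means NUMA node 1 has no 2 MB hugepages configured, which is harmless as long as node 0 provides them. Not captured output, but a standard sysfs check for that on the test host:

  grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages
  # one line per NUMA node; a zero for node1 matches the notice seen throughout this log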
00:06:41.063 [2024-07-15 15:58:16.887898] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2086959 ] 00:06:41.323 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.323 [2024-07-15 15:58:16.958750] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.323 [2024-07-15 15:58:17.029442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.323 15:58:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:41.323 15:58:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.323 15:58:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.323 15:58:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.323 15:58:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:41.323 15:58:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.323 15:58:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.323 15:58:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.323 15:58:17 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:41.323 15:58:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.323 15:58:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.323 15:58:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.323 15:58:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:41.323 15:58:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.324 15:58:17 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.324 15:58:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.709 00:06:42.709 real 0m1.298s 00:06:42.709 user 0m1.198s 00:06:42.709 sys 0m0.112s 00:06:42.709 15:58:18 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.709 15:58:18 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:42.709 ************************************ 00:06:42.709 END TEST accel_xor 00:06:42.709 ************************************ 00:06:42.709 15:58:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:42.709 15:58:18 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:42.709 15:58:18 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:42.709 15:58:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.709 15:58:18 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.709 ************************************ 00:06:42.709 START TEST accel_xor 00:06:42.709 ************************************ 00:06:42.709 15:58:18 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:42.709 [2024-07-15 15:58:18.261452] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
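This stage reruns the xor workload with -x 3 appended, exactly as the logged accel_perf command line shows; the val=3 echoed in the trace suggests three source buffers are XORed, though that reading is inferred rather than stated by the log. A hedged standalone equivalent, under the same ./build assumption as above:

  time ./build/examples/accel_perf -t 1 -w xor -y -x 3  # flags copied verbatim from the logged command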
00:06:42.709 [2024-07-15 15:58:18.261544] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2087312 ] 00:06:42.709 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.709 [2024-07-15 15:58:18.322307] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.709 [2024-07-15 15:58:18.386663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.709 15:58:18 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:42.710 15:58:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.710 15:58:18 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:42.710 15:58:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.710 15:58:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.710 15:58:18 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:42.710 15:58:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.710 15:58:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.710 15:58:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.710 15:58:18 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:42.710 15:58:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.710 15:58:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.710 15:58:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.710 15:58:18 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:42.710 15:58:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.710 15:58:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.710 15:58:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.710 15:58:18 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:42.710 15:58:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.710 15:58:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.710 15:58:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.710 15:58:18 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:42.710 15:58:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.710 15:58:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.710 15:58:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.710 15:58:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:42.710 15:58:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.710 15:58:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.710 15:58:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.710 15:58:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:42.710 15:58:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.710 15:58:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.710 15:58:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.093 15:58:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:44.093 15:58:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.093 15:58:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.093 15:58:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.093 15:58:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:44.093 15:58:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.093 15:58:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.093 15:58:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.093 15:58:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:44.093 15:58:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.093 15:58:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.093 15:58:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.093 15:58:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:44.093 15:58:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.093 15:58:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.093 15:58:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.093 15:58:19 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:44.093 15:58:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.093 15:58:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.093 15:58:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.093 15:58:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:44.093 15:58:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.093 15:58:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.093 15:58:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.093 15:58:19 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:44.093 15:58:19 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:44.093 15:58:19 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.093 00:06:44.093 real 0m1.284s 00:06:44.093 user 0m1.202s 00:06:44.093 sys 0m0.094s 00:06:44.093 15:58:19 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.093 15:58:19 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:44.093 ************************************ 00:06:44.093 END TEST accel_xor 00:06:44.093 ************************************ 00:06:44.093 15:58:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:44.093 15:58:19 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:44.093 15:58:19 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:44.093 15:58:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.093 15:58:19 accel -- common/autotest_common.sh@10 -- # set +x 00:06:44.093 ************************************ 00:06:44.093 START TEST accel_dif_verify 00:06:44.093 ************************************ 00:06:44.093 15:58:19 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:44.093 15:58:19 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:44.093 15:58:19 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:44.093 15:58:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.093 15:58:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.093 15:58:19 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:44.093 15:58:19 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:44.093 15:58:19 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:44.093 15:58:19 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:44.093 15:58:19 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:44.093 15:58:19 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.093 15:58:19 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.093 15:58:19 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:44.093 15:58:19 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:44.093 15:58:19 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:44.093 [2024-07-15 15:58:19.622058] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
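The DIF stages drop the -y switch (the trace echoes val=No rather than val=Yes) and parse two extra sizes, '512 bytes' and '8 bytes', which would be consistent with a 512-byte block carrying 8 bytes of DIF metadata; that mapping is an inference, not something the log spells out. A hedged standalone equivalent of the dif_verify run, flags copied from the logged command:

  time ./build/examples/accel_perf -t 1 -w dif_verify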
00:06:44.093 [2024-07-15 15:58:19.622136] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2087511 ] 00:06:44.093 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.093 [2024-07-15 15:58:19.684204] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.093 [2024-07-15 15:58:19.753887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.093 15:58:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:44.093 15:58:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.093 15:58:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.093 15:58:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.093 15:58:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:44.093 15:58:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.093 15:58:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.093 15:58:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.093 15:58:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:44.093 15:58:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.093 15:58:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.093 15:58:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.093 15:58:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:44.093 15:58:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.093 15:58:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.093 15:58:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.093 15:58:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:44.093 15:58:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.093 15:58:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.093 15:58:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.094 15:58:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:45.478 15:58:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:06:45.478 15:58:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:45.478 15:58:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:45.478 15:58:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:45.478 15:58:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:45.478 15:58:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:45.478 15:58:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:45.478 15:58:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:45.478 15:58:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:45.478 15:58:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:45.478 15:58:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:45.478 15:58:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:45.478 15:58:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:45.478 15:58:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:45.478 15:58:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:45.478 15:58:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:45.478 15:58:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:45.478 15:58:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:45.478 15:58:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:45.478 15:58:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:45.478 15:58:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:45.478 15:58:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:45.478 15:58:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:45.478 15:58:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:45.478 15:58:20 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:45.478 15:58:20 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:45.478 15:58:20 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:45.478 00:06:45.478 real 0m1.290s 00:06:45.478 user 0m1.197s 00:06:45.478 sys 0m0.105s 00:06:45.478 15:58:20 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.478 15:58:20 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:45.478 ************************************ 00:06:45.478 END TEST accel_dif_verify 00:06:45.478 ************************************ 00:06:45.478 15:58:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:45.478 15:58:20 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:45.478 15:58:20 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:45.478 15:58:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.478 15:58:20 accel -- common/autotest_common.sh@10 -- # set +x 00:06:45.478 ************************************ 00:06:45.478 START TEST accel_dif_generate 00:06:45.478 ************************************ 00:06:45.478 15:58:20 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:45.478 15:58:20 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:45.478 15:58:20 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:45.478 15:58:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:45.478 
15:58:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:45.478 15:58:20 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:45.478 15:58:20 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:45.478 15:58:20 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:45.478 15:58:20 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:45.478 15:58:20 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:45.478 15:58:20 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.478 15:58:20 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.478 15:58:20 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:45.478 15:58:20 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:45.478 15:58:20 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:45.478 [2024-07-15 15:58:20.991237] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:06:45.478 [2024-07-15 15:58:20.991362] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2087712 ] 00:06:45.478 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.478 [2024-07-15 15:58:21.062444] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.478 [2024-07-15 15:58:21.128724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.478 15:58:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:45.478 15:58:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:45.478 15:58:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:45.478 15:58:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:45.478 15:58:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:45.478 15:58:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:45.478 15:58:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:45.478 15:58:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:45.478 15:58:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:45.478 15:58:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:45.478 15:58:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:45.478 15:58:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:45.478 15:58:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:45.478 15:58:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:45.478 15:58:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:45.478 15:58:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:45.478 15:58:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:45.478 15:58:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:45.478 15:58:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:45.478 15:58:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:45.478 15:58:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:45.478 15:58:21 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:45.478 15:58:21 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:45.478 15:58:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:45.478 15:58:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:45.478 15:58:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:45.478 15:58:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:45.478 15:58:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:45.478 15:58:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:45.478 15:58:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:45.478 15:58:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:45.478 15:58:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:45.478 15:58:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:45.478 15:58:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:45.479 15:58:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:45.479 15:58:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:45.479 15:58:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:45.479 15:58:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:45.479 15:58:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:45.479 15:58:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:45.479 15:58:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:45.479 15:58:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:45.479 15:58:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:45.479 15:58:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:45.479 15:58:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:45.479 15:58:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:45.479 15:58:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:45.479 15:58:21 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:45.479 15:58:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:45.479 15:58:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:45.479 15:58:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:45.479 15:58:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:45.479 15:58:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:45.479 15:58:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:45.479 15:58:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:45.479 15:58:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:45.479 15:58:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:45.479 15:58:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:45.479 15:58:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:45.479 15:58:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:45.479 15:58:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:45.479 15:58:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:45.479 15:58:21 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:45.479 15:58:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:45.479 15:58:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:45.479 15:58:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:45.479 15:58:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:45.479 15:58:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:45.479 15:58:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:45.479 15:58:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:45.479 15:58:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:45.479 15:58:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:45.479 15:58:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:45.479 15:58:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:45.479 15:58:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:45.479 15:58:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:45.479 15:58:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:45.479 15:58:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.422 15:58:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:46.422 15:58:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.422 15:58:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.422 15:58:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.422 15:58:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:46.422 15:58:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.422 15:58:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.422 15:58:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.422 15:58:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:46.422 15:58:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.422 15:58:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.422 15:58:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.422 15:58:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:46.422 15:58:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.422 15:58:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.422 15:58:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.422 15:58:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:46.422 15:58:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.422 15:58:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.422 15:58:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.422 15:58:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:46.422 15:58:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.422 15:58:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.422 15:58:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.422 15:58:22 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:46.422 15:58:22 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:46.422 15:58:22 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.422 00:06:46.422 real 0m1.297s 00:06:46.422 user 0m1.202s 00:06:46.422 sys 0m0.107s 00:06:46.422 15:58:22 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.422 15:58:22 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:46.422 ************************************ 00:06:46.422 END TEST accel_dif_generate 00:06:46.422 ************************************ 00:06:46.682 15:58:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:46.682 15:58:22 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:46.682 15:58:22 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:46.682 15:58:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.682 15:58:22 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.682 ************************************ 00:06:46.682 START TEST accel_dif_generate_copy 00:06:46.682 ************************************ 00:06:46.682 15:58:22 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:46.682 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:46.682 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:46.682 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.682 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.682 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:46.682 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:46.682 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:46.682 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.682 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.682 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.682 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.683 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.683 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:46.683 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:46.683 [2024-07-15 15:58:22.362552] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
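For reference, the dif_generate case that has just completed above is driven entirely by the accel_perf example with the flags visible in the trace, and the dif_generate_copy run now starting follows the same pattern with -w dif_generate_copy. A minimal sketch of rerunning that workload by hand against the same build (the path is taken from this log; dropping the harness's -c /dev/fd/62 empty JSON config and relying on defaults is an assumption):

# Hedged sketch: standalone rerun of the dif_generate workload traced above.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# -t 1 runs the workload for 1 second; -w selects the dif_generate operation.
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w dif_generate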
00:06:46.683 [2024-07-15 15:58:22.362647] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2088050 ] 00:06:46.683 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.683 [2024-07-15 15:58:22.425224] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.683 [2024-07-15 15:58:22.494244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.969 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:46.969 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.969 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.969 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.969 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:46.969 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.969 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.969 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.969 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:46.969 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.969 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.969 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.969 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:46.969 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.969 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.969 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.969 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:46.969 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.969 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.969 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.969 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:46.969 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.969 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:46.969 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.969 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.969 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:46.969 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.969 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.969 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.969 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:46.969 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.969 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.969 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:06:46.969 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:46.969 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.969 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.969 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.969 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:46.969 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.969 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:46.970 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.970 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.970 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:46.970 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.970 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.970 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.970 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:46.970 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.970 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.970 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.970 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:46.970 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.970 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.970 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.970 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:46.970 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.970 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.970 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.970 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:46.970 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.970 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.970 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.970 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:46.970 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.970 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.970 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.970 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:46.970 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.970 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.970 15:58:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.987 15:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:47.987 15:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.987 15:58:23 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:47.987 15:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.987 15:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:47.987 15:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.987 15:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.987 15:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.987 15:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:47.987 15:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.987 15:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.987 15:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.987 15:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:47.987 15:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.987 15:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.987 15:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.987 15:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:47.987 15:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.987 15:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.987 15:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.987 15:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:47.987 15:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.987 15:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.987 15:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.987 15:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:47.987 15:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:47.987 15:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.987 00:06:47.987 real 0m1.290s 00:06:47.987 user 0m1.201s 00:06:47.987 sys 0m0.100s 00:06:47.987 15:58:23 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.987 15:58:23 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:47.987 ************************************ 00:06:47.987 END TEST accel_dif_generate_copy 00:06:47.987 ************************************ 00:06:47.987 15:58:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:47.987 15:58:23 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:47.987 15:58:23 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:47.987 15:58:23 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:47.987 15:58:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.987 15:58:23 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.987 ************************************ 00:06:47.987 START TEST accel_comp 00:06:47.987 ************************************ 00:06:47.987 15:58:23 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:47.987 15:58:23 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:06:47.987 15:58:23 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:47.987 15:58:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:47.987 15:58:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:47.987 15:58:23 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:47.987 15:58:23 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:47.987 15:58:23 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:47.987 15:58:23 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.987 15:58:23 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.987 15:58:23 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.987 15:58:23 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.987 15:58:23 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.987 15:58:23 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:47.987 15:58:23 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:47.987 [2024-07-15 15:58:23.730041] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:06:47.987 [2024-07-15 15:58:23.730146] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2088401 ] 00:06:47.987 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.987 [2024-07-15 15:58:23.792513] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.247 [2024-07-15 15:58:23.862630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.247 15:58:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:48.247 15:58:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.247 15:58:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.247 15:58:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.247 15:58:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:48.247 15:58:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.247 15:58:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.247 15:58:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.247 15:58:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:48.247 15:58:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.247 15:58:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.247 15:58:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.247 15:58:23 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:48.247 15:58:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.247 15:58:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.247 15:58:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.247 15:58:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:48.247 15:58:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.247 15:58:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.247 15:58:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.247 15:58:23 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:06:48.247 15:58:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.247 15:58:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.247 15:58:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.247 15:58:23 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:48.247 15:58:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.247 15:58:23 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:48.247 15:58:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.247 15:58:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.247 15:58:23 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:48.247 15:58:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.247 15:58:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.247 15:58:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.247 15:58:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:48.247 15:58:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.247 15:58:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.247 15:58:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.247 15:58:23 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:48.247 15:58:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.247 15:58:23 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:48.247 15:58:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.247 15:58:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.247 15:58:23 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:48.247 15:58:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.248 15:58:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.248 15:58:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.248 15:58:23 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:48.248 15:58:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.248 15:58:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.248 15:58:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.248 15:58:23 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:48.248 15:58:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.248 15:58:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.248 15:58:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.248 15:58:23 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:48.248 15:58:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.248 15:58:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.248 15:58:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.248 15:58:23 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:48.248 15:58:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.248 15:58:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.248 15:58:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.248 15:58:23 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:48.248 15:58:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.248 15:58:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.248 15:58:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:06:48.248 15:58:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:48.248 15:58:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.248 15:58:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.248 15:58:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.248 15:58:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:48.248 15:58:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.248 15:58:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.248 15:58:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:49.187 15:58:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:49.187 15:58:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.187 15:58:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:49.187 15:58:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:49.187 15:58:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:49.187 15:58:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.187 15:58:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:49.187 15:58:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:49.187 15:58:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:49.187 15:58:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.187 15:58:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:49.187 15:58:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:49.187 15:58:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:49.187 15:58:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.187 15:58:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:49.187 15:58:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:49.187 15:58:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:49.187 15:58:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.187 15:58:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:49.187 15:58:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:49.187 15:58:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:49.187 15:58:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.187 15:58:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:49.187 15:58:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:49.187 15:58:24 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:49.187 15:58:24 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:49.187 15:58:24 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.187 00:06:49.187 real 0m1.294s 00:06:49.187 user 0m1.208s 00:06:49.187 sys 0m0.098s 00:06:49.187 15:58:24 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.187 15:58:24 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:49.187 ************************************ 00:06:49.187 END TEST accel_comp 00:06:49.187 ************************************ 00:06:49.447 15:58:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:49.447 15:58:25 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:49.447 15:58:25 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:49.447 15:58:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.447 15:58:25 accel -- 
common/autotest_common.sh@10 -- # set +x 00:06:49.447 ************************************ 00:06:49.447 START TEST accel_decomp 00:06:49.447 ************************************ 00:06:49.447 15:58:25 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:49.447 [2024-07-15 15:58:25.097258] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
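The compress test that finished above and the decompress test starting here feed accel_perf the same input file, test/accel/bib. A minimal sketch of that pair of invocations, lifted from the traced command lines (reading -y as result verification is an assumption, not something this log states):

# Hedged sketch of the compress/decompress pair of accel_perf runs traced here.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BIB="$SPDK_DIR/test/accel/bib"   # input file used by both runs
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w compress   -l "$BIB"
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress -l "$BIB" -y   # -y: assumed to verify the result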
00:06:49.447 [2024-07-15 15:58:25.097350] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2088749 ] 00:06:49.447 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.447 [2024-07-15 15:58:25.157760] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.447 [2024-07-15 15:58:25.223019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:49.447 15:58:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:49.448 15:58:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.448 15:58:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:49.448 15:58:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.829 15:58:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:50.829 15:58:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.829 15:58:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.829 15:58:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.829 15:58:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:50.829 15:58:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.829 15:58:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.829 15:58:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.829 15:58:26 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:50.829 15:58:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.829 15:58:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.829 15:58:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.829 15:58:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:50.829 15:58:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.829 15:58:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.829 15:58:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.829 15:58:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:50.829 15:58:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.829 15:58:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.829 15:58:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.829 15:58:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:50.829 15:58:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.829 15:58:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.829 15:58:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.829 15:58:26 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:50.829 15:58:26 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:50.829 15:58:26 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.829 00:06:50.829 real 0m1.285s 00:06:50.829 user 0m1.204s 00:06:50.829 sys 0m0.094s 00:06:50.829 15:58:26 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.829 15:58:26 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:50.829 ************************************ 00:06:50.830 END TEST accel_decomp 00:06:50.830 ************************************ 00:06:50.830 15:58:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:50.830 15:58:26 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:50.830 15:58:26 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:50.830 15:58:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.830 15:58:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:50.830 ************************************ 00:06:50.830 START TEST accel_decomp_full 00:06:50.830 ************************************ 00:06:50.830 15:58:26 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:50.830 15:58:26 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:50.830 [2024-07-15 15:58:26.457067] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:06:50.830 [2024-07-15 15:58:26.457140] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2089026 ] 00:06:50.830 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.830 [2024-07-15 15:58:26.518848] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.830 [2024-07-15 15:58:26.587961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.830 15:58:26 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.830 15:58:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:52.216 15:58:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:52.216 15:58:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:52.216 15:58:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:52.216 15:58:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:52.217 15:58:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:52.217 15:58:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:52.217 15:58:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:52.217 15:58:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:52.217 15:58:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:52.217 15:58:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:52.217 15:58:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:52.217 15:58:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:52.217 15:58:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:52.217 15:58:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:52.217 15:58:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:52.217 15:58:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:52.217 15:58:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:52.217 15:58:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:52.217 15:58:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:52.217 15:58:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:52.217 15:58:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:52.217 15:58:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:52.217 15:58:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:52.217 15:58:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:52.217 15:58:27 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:52.217 15:58:27 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:52.217 15:58:27 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.217 00:06:52.217 real 0m1.305s 00:06:52.217 user 0m1.212s 00:06:52.217 sys 0m0.106s 00:06:52.217 15:58:27 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.217 15:58:27 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:52.217 ************************************ 00:06:52.217 END TEST accel_decomp_full 00:06:52.217 ************************************ 00:06:52.217 15:58:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:52.217 15:58:27 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:52.217 15:58:27 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
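The decomp_full variant that just passed differs from the plain decompress run only by the extra -o 0 argument, and its trace records a '111250 bytes' buffer where the earlier run showed '4096 bytes'. A sketch of that invocation (interpreting -o 0 as "process the whole input in one operation" is an inference from those traced sizes, not documented in this log):

# Hedged sketch of the decompress "full" run traced above; only -o 0 is new.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK_DIR/test/accel/bib" -y -o 0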
00:06:52.217 15:58:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.217 15:58:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:52.217 ************************************ 00:06:52.217 START TEST accel_decomp_mcore 00:06:52.217 ************************************ 00:06:52.217 15:58:27 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:52.217 15:58:27 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:52.217 15:58:27 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:52.217 15:58:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.217 15:58:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.217 15:58:27 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:52.217 15:58:27 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:52.217 15:58:27 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:52.217 15:58:27 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:52.217 15:58:27 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:52.217 15:58:27 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.217 15:58:27 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.217 15:58:27 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:52.217 15:58:27 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:52.217 15:58:27 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:52.217 [2024-07-15 15:58:27.840414] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
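The multi-core variant starting here adds -m 0xf to the same decompress command. 0xf is a hexadecimal core mask, binary 1111, selecting cores 0 through 3, which matches the four reactors reported just below; a sketch of the invocation, taken from the traced command line:

# Hedged sketch of the multi-core decompress run; mask 0xf = 1111b -> cores 0-3.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK_DIR/test/accel/bib" -y -m 0xf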
00:06:52.217 [2024-07-15 15:58:27.840512] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2089223 ] 00:06:52.217 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.217 [2024-07-15 15:58:27.915937] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:52.217 [2024-07-15 15:58:27.991866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.217 [2024-07-15 15:58:27.991983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.217 [2024-07-15 15:58:27.992158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.217 [2024-07-15 15:58:27.992158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.217 15:58:28 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:52.217 15:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.603 15:58:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:53.603 15:58:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.603 15:58:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.603 15:58:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.603 15:58:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:53.603 15:58:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.603 15:58:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.603 15:58:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.603 15:58:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:53.603 15:58:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.603 15:58:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.603 15:58:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.603 15:58:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:53.603 15:58:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.603 15:58:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.603 15:58:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.603 15:58:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:53.603 15:58:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.603 15:58:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.603 15:58:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.603 15:58:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:53.603 15:58:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.603 15:58:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.603 15:58:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.603 15:58:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:53.603 15:58:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.603 15:58:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.603 15:58:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.603 15:58:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:53.603 15:58:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.603 15:58:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.603 15:58:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.603 15:58:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:53.603 15:58:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.603 15:58:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.603 15:58:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.603 15:58:29 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:53.603 15:58:29 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:53.603 15:58:29 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.603 00:06:53.603 real 0m1.321s 00:06:53.603 user 0m4.445s 00:06:53.603 sys 0m0.123s 00:06:53.603 15:58:29 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.603 15:58:29 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:53.603 ************************************ 00:06:53.603 END TEST accel_decomp_mcore 00:06:53.603 ************************************ 00:06:53.603 15:58:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:53.603 15:58:29 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:53.603 15:58:29 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:53.603 15:58:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.603 15:58:29 accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.603 ************************************ 00:06:53.603 START TEST accel_decomp_full_mcore 00:06:53.603 ************************************ 00:06:53.603 15:58:29 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:53.604 [2024-07-15 15:58:29.235388] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
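For reference, every accel_decomp_* case in this block drives the same accel_perf example binary that the accel_test wrapper launches in the trace above. A minimal hand-run sketch of the full_mcore variant, using the paths and flags recorded in the log and assuming no JSON accel config is required (build_accel_config above ends up empty, so the software module is used as-is), would be:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # 1-second software decompress of the bundled test input on a 4-core mask,
    # mirroring the accel_test invocation for accel_decomp_full_mcore above
    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0 -m 0xf

The per-variant differences visible in the recorded invocations are only the flags: -o 0 appears in the *_full_* runs, -m 0xf in the *_mcore runs, and -T 2 in the *_mthread runs.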
00:06:53.604 [2024-07-15 15:58:29.235469] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2089493 ] 00:06:53.604 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.604 [2024-07-15 15:58:29.299342] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:53.604 [2024-07-15 15:58:29.371281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.604 [2024-07-15 15:58:29.371396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.604 [2024-07-15 15:58:29.371554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.604 [2024-07-15 15:58:29.371554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.604 15:58:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.990 15:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:54.990 15:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.990 15:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.990 15:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.990 15:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:54.990 15:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.990 15:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.990 15:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.990 15:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:54.990 15:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.990 15:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.990 15:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.990 15:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:54.990 15:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.990 15:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.990 15:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.990 15:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:54.990 15:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.990 15:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.990 15:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.990 15:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:54.990 15:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.990 15:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.990 15:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.990 15:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:54.990 15:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.990 15:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.990 15:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.990 15:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:54.990 15:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.990 15:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.990 15:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.990 15:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:54.990 15:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.990 15:58:30 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:54.990 15:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.990 15:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:54.990 15:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:54.990 15:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.990 00:06:54.990 real 0m1.313s 00:06:54.990 user 0m4.475s 00:06:54.990 sys 0m0.113s 00:06:54.990 15:58:30 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.990 15:58:30 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:54.990 ************************************ 00:06:54.990 END TEST accel_decomp_full_mcore 00:06:54.990 ************************************ 00:06:54.990 15:58:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:54.990 15:58:30 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:54.990 15:58:30 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:54.990 15:58:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.990 15:58:30 accel -- common/autotest_common.sh@10 -- # set +x 00:06:54.990 ************************************ 00:06:54.990 START TEST accel_decomp_mthread 00:06:54.990 ************************************ 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:54.990 [2024-07-15 15:58:30.624272] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
00:06:54.990 [2024-07-15 15:58:30.624335] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2089849 ] 00:06:54.990 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.990 [2024-07-15 15:58:30.685229] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.990 [2024-07-15 15:58:30.751067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:54.990 15:58:30 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.990 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.991 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:54.991 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.991 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.991 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.991 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:54.991 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.991 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.991 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.991 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:54.991 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.991 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.991 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.991 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:54.991 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.991 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.991 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.991 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:54.991 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.991 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.991 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.991 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:54.991 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.991 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.991 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.991 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:54.991 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.991 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.991 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.991 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:54.991 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.991 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.991 15:58:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.376 15:58:31 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:06:56.376 15:58:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.376 15:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.376 15:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.376 15:58:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:56.376 15:58:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.376 15:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.376 15:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.376 15:58:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:56.376 15:58:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.376 15:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.376 15:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.376 15:58:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:56.376 15:58:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.376 15:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.376 15:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.376 15:58:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:56.376 15:58:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.376 15:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.376 15:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.376 15:58:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:56.376 15:58:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.376 15:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.376 15:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.376 15:58:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:56.376 15:58:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.376 15:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.376 15:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.376 15:58:31 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:56.376 15:58:31 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:56.376 15:58:31 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.376 00:06:56.376 real 0m1.291s 00:06:56.376 user 0m1.200s 00:06:56.376 sys 0m0.104s 00:06:56.376 15:58:31 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.376 15:58:31 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:56.376 ************************************ 00:06:56.376 END TEST accel_decomp_mthread 00:06:56.376 ************************************ 00:06:56.376 15:58:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:56.376 15:58:31 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:56.376 15:58:31 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:56.376 15:58:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.376 15:58:31 accel -- 
common/autotest_common.sh@10 -- # set +x 00:06:56.376 ************************************ 00:06:56.376 START TEST accel_decomp_full_mthread 00:06:56.376 ************************************ 00:06:56.376 15:58:31 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:56.376 15:58:31 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:56.376 15:58:31 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:56.376 15:58:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.376 15:58:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.376 15:58:31 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:56.376 15:58:31 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:56.376 15:58:31 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:56.376 15:58:31 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.376 15:58:31 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.376 15:58:31 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.376 15:58:31 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.376 15:58:31 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.376 15:58:31 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:56.376 15:58:31 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:56.376 [2024-07-15 15:58:31.990009] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
00:06:56.376 [2024-07-15 15:58:31.990102] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2090201 ] 00:06:56.376 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.376 [2024-07-15 15:58:32.053008] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.376 [2024-07-15 15:58:32.120244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.376 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:56.376 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.376 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.376 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.376 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:56.376 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.376 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.376 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.376 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:56.376 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.376 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.376 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.376 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:56.376 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.376 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.376 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.376 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:56.376 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.376 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.376 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.376 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:56.376 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.376 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.376 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.376 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:56.376 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.376 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:56.376 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.376 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.376 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:56.376 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.376 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.376 15:58:32 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.377 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:56.377 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.377 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.377 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.377 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:56.377 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.377 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:56.377 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.377 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.377 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:56.377 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.377 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.377 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.377 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:56.377 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.377 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.377 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.377 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:56.377 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.377 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.377 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.377 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:56.377 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.377 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.377 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.377 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:56.377 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.377 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.377 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.377 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:56.377 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.377 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.377 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.377 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:56.377 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.377 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.377 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.377 15:58:32 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:56.377 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.377 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.377 15:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.762 15:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:57.762 15:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.762 15:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.762 15:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.762 15:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:57.762 15:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.762 15:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.762 15:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.762 15:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:57.762 15:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.762 15:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.762 15:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.762 15:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:57.762 15:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.762 15:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.762 15:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.762 15:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:57.762 15:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.762 15:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.762 15:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.762 15:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:57.762 15:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.762 15:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.762 15:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.762 15:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:57.762 15:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.762 15:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.762 15:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.762 15:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:57.762 15:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:57.762 15:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.762 00:06:57.762 real 0m1.326s 00:06:57.762 user 0m1.237s 00:06:57.762 sys 0m0.101s 00:06:57.762 15:58:33 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.762 15:58:33 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:57.762 ************************************ 00:06:57.762 END 
TEST accel_decomp_full_mthread 00:06:57.762 ************************************ 00:06:57.762 15:58:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:57.762 15:58:33 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:57.762 15:58:33 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:57.762 15:58:33 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:57.762 15:58:33 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:57.762 15:58:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.762 15:58:33 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.762 15:58:33 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.762 15:58:33 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.762 15:58:33 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.762 15:58:33 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.762 15:58:33 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.762 15:58:33 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:57.762 15:58:33 accel -- accel/accel.sh@41 -- # jq -r . 00:06:57.762 ************************************ 00:06:57.762 START TEST accel_dif_functional_tests 00:06:57.762 ************************************ 00:06:57.762 15:58:33 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:57.762 [2024-07-15 15:58:33.409273] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:06:57.762 [2024-07-15 15:58:33.409328] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2090549 ] 00:06:57.762 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.762 [2024-07-15 15:58:33.469564] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:57.762 [2024-07-15 15:58:33.539832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.762 [2024-07-15 15:58:33.539950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:57.762 [2024-07-15 15:58:33.539953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.762 00:06:57.762 00:06:57.762 CUnit - A unit testing framework for C - Version 2.1-3 00:06:57.762 http://cunit.sourceforge.net/ 00:06:57.762 00:06:57.762 00:06:57.762 Suite: accel_dif 00:06:57.762 Test: verify: DIF generated, GUARD check ...passed 00:06:57.762 Test: verify: DIF generated, APPTAG check ...passed 00:06:57.762 Test: verify: DIF generated, REFTAG check ...passed 00:06:57.762 Test: verify: DIF not generated, GUARD check ...[2024-07-15 15:58:33.595551] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:57.762 passed 00:06:57.762 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 15:58:33.595595] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:57.762 passed 00:06:57.762 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 15:58:33.595616] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:57.762 passed 00:06:57.763 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:57.763 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 
15:58:33.595665] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:57.763 passed 00:06:57.763 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:57.763 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:57.763 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:57.763 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 15:58:33.595778] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:57.763 passed 00:06:57.763 Test: verify copy: DIF generated, GUARD check ...passed 00:06:57.763 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:57.763 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:57.763 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 15:58:33.595898] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:57.763 passed 00:06:57.763 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 15:58:33.595921] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:57.763 passed 00:06:57.763 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 15:58:33.595944] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:57.763 passed 00:06:57.763 Test: generate copy: DIF generated, GUARD check ...passed 00:06:57.763 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:57.763 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:57.763 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:57.763 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:57.763 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:57.763 Test: generate copy: iovecs-len validate ...[2024-07-15 15:58:33.596130] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:57.763 passed 00:06:57.763 Test: generate copy: buffer alignment validate ...passed 00:06:57.763 00:06:57.763 Run Summary: Type Total Ran Passed Failed Inactive 00:06:57.763 suites 1 1 n/a 0 0 00:06:57.763 tests 26 26 26 0 0 00:06:57.763 asserts 115 115 115 0 n/a 00:06:57.763 00:06:57.763 Elapsed time = 0.002 seconds 00:06:58.024 00:06:58.024 real 0m0.352s 00:06:58.024 user 0m0.492s 00:06:58.024 sys 0m0.127s 00:06:58.024 15:58:33 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.024 15:58:33 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:58.024 ************************************ 00:06:58.024 END TEST accel_dif_functional_tests 00:06:58.024 ************************************ 00:06:58.024 15:58:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:58.024 00:06:58.024 real 0m30.136s 00:06:58.024 user 0m33.702s 00:06:58.024 sys 0m4.179s 00:06:58.024 15:58:33 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.024 15:58:33 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.024 ************************************ 00:06:58.024 END TEST accel 00:06:58.024 ************************************ 00:06:58.024 15:58:33 -- common/autotest_common.sh@1142 -- # return 0 00:06:58.024 15:58:33 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:58.024 15:58:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:58.024 15:58:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.024 15:58:33 -- common/autotest_common.sh@10 -- # set +x 00:06:58.024 ************************************ 00:06:58.024 START TEST accel_rpc 00:06:58.024 ************************************ 00:06:58.024 15:58:33 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:58.285 * Looking for test storage... 00:06:58.285 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:58.285 15:58:33 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:58.285 15:58:33 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2090618 00:06:58.285 15:58:33 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2090618 00:06:58.285 15:58:33 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:58.285 15:58:33 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 2090618 ']' 00:06:58.285 15:58:33 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.285 15:58:33 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:58.285 15:58:33 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.285 15:58:33 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:58.285 15:58:33 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.285 [2024-07-15 15:58:33.981907] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
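The accel_rpc suite starting here brings up spdk_tgt with --wait-for-rpc (pid 2090618) and does all of its work over JSON-RPC. A minimal manual equivalent of the accel_assign_opcode flow exercised below, assuming the target's default /var/tmp/spdk.sock socket is up before the RPCs are issued, looks like:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./build/bin/spdk_tgt --wait-for-rpc &
    # assign the copy opcode to a bogus module first, then to software, as the script does
    ./scripts/rpc.py accel_assign_opc -o copy -m incorrect
    ./scripts/rpc.py accel_assign_opc -o copy -m software
    ./scripts/rpc.py framework_start_init
    # verify the final assignment
    ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy

The last command should print "software", matching the check recorded in the trace that follows.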
00:06:58.285 [2024-07-15 15:58:33.981961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2090618 ] 00:06:58.285 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.285 [2024-07-15 15:58:34.041970] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.285 [2024-07-15 15:58:34.109256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.228 15:58:34 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:59.228 15:58:34 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:59.228 15:58:34 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:59.228 15:58:34 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:59.228 15:58:34 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:59.228 15:58:34 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:59.228 15:58:34 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:59.228 15:58:34 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:59.228 15:58:34 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.228 15:58:34 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.228 ************************************ 00:06:59.228 START TEST accel_assign_opcode 00:06:59.228 ************************************ 00:06:59.228 15:58:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:59.228 15:58:34 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:59.228 15:58:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.228 15:58:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:59.228 [2024-07-15 15:58:34.771170] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:59.228 15:58:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.228 15:58:34 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:59.228 15:58:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.228 15:58:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:59.228 [2024-07-15 15:58:34.783196] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:59.228 15:58:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.228 15:58:34 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:59.228 15:58:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.228 15:58:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:59.228 15:58:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.228 15:58:34 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:59.228 15:58:34 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:59.228 15:58:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 
00:06:59.228 15:58:34 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:59.228 15:58:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:59.228 15:58:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.228 software 00:06:59.228 00:06:59.228 real 0m0.211s 00:06:59.228 user 0m0.051s 00:06:59.228 sys 0m0.010s 00:06:59.228 15:58:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.228 15:58:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:59.228 ************************************ 00:06:59.228 END TEST accel_assign_opcode 00:06:59.228 ************************************ 00:06:59.228 15:58:35 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:59.228 15:58:35 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2090618 00:06:59.228 15:58:35 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 2090618 ']' 00:06:59.228 15:58:35 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 2090618 00:06:59.228 15:58:35 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:59.228 15:58:35 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:59.228 15:58:35 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2090618 00:06:59.228 15:58:35 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:59.228 15:58:35 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:59.228 15:58:35 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2090618' 00:06:59.228 killing process with pid 2090618 00:06:59.228 15:58:35 accel_rpc -- common/autotest_common.sh@967 -- # kill 2090618 00:06:59.228 15:58:35 accel_rpc -- common/autotest_common.sh@972 -- # wait 2090618 00:06:59.490 00:06:59.490 real 0m1.446s 00:06:59.490 user 0m1.520s 00:06:59.490 sys 0m0.394s 00:06:59.490 15:58:35 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.490 15:58:35 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.490 ************************************ 00:06:59.490 END TEST accel_rpc 00:06:59.490 ************************************ 00:06:59.490 15:58:35 -- common/autotest_common.sh@1142 -- # return 0 00:06:59.490 15:58:35 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:59.490 15:58:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:59.490 15:58:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.490 15:58:35 -- common/autotest_common.sh@10 -- # set +x 00:06:59.751 ************************************ 00:06:59.751 START TEST app_cmdline 00:06:59.751 ************************************ 00:06:59.751 15:58:35 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:59.751 * Looking for test storage... 
00:06:59.751 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:59.751 15:58:35 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:59.751 15:58:35 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2091025 00:06:59.751 15:58:35 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2091025 00:06:59.751 15:58:35 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 2091025 ']' 00:06:59.751 15:58:35 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.751 15:58:35 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:59.751 15:58:35 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.751 15:58:35 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:59.751 15:58:35 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:59.751 15:58:35 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:59.751 [2024-07-15 15:58:35.501446] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:06:59.751 [2024-07-15 15:58:35.501507] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2091025 ] 00:06:59.751 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.751 [2024-07-15 15:58:35.561598] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.011 [2024-07-15 15:58:35.629452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.583 15:58:36 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:00.583 15:58:36 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:00.583 15:58:36 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:00.583 { 00:07:00.583 "version": "SPDK v24.09-pre git sha1 97f71d59d", 00:07:00.583 "fields": { 00:07:00.583 "major": 24, 00:07:00.583 "minor": 9, 00:07:00.583 "patch": 0, 00:07:00.583 "suffix": "-pre", 00:07:00.583 "commit": "97f71d59d" 00:07:00.583 } 00:07:00.583 } 00:07:00.583 15:58:36 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:00.583 15:58:36 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:00.583 15:58:36 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:00.583 15:58:36 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:00.583 15:58:36 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:00.583 15:58:36 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:00.583 15:58:36 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.583 15:58:36 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:00.583 15:58:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:00.583 15:58:36 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.844 15:58:36 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:00.844 15:58:36 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:00.844 15:58:36 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:00.844 15:58:36 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:00.844 15:58:36 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:00.844 15:58:36 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:00.844 15:58:36 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:00.844 15:58:36 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:00.844 15:58:36 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:00.844 15:58:36 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:00.845 15:58:36 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:00.845 15:58:36 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:00.845 15:58:36 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:00.845 15:58:36 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:00.845 request: 00:07:00.845 { 00:07:00.845 "method": "env_dpdk_get_mem_stats", 00:07:00.845 "req_id": 1 00:07:00.845 } 00:07:00.845 Got JSON-RPC error response 00:07:00.845 response: 00:07:00.845 { 00:07:00.845 "code": -32601, 00:07:00.845 "message": "Method not found" 00:07:00.845 } 00:07:00.845 15:58:36 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:00.845 15:58:36 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:00.845 15:58:36 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:00.845 15:58:36 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:00.845 15:58:36 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2091025 00:07:00.845 15:58:36 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 2091025 ']' 00:07:00.845 15:58:36 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 2091025 00:07:00.845 15:58:36 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:00.845 15:58:36 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:00.845 15:58:36 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2091025 00:07:00.845 15:58:36 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:00.845 15:58:36 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:00.845 15:58:36 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2091025' 00:07:00.845 killing process with pid 2091025 00:07:00.845 15:58:36 app_cmdline -- common/autotest_common.sh@967 -- # kill 2091025 00:07:00.845 15:58:36 app_cmdline -- common/autotest_common.sh@972 -- # wait 2091025 00:07:01.106 00:07:01.106 real 0m1.525s 00:07:01.106 user 0m1.802s 00:07:01.106 sys 0m0.406s 00:07:01.106 15:58:36 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
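The app_cmdline test above is essentially a check of the --rpcs-allowed filter: spdk_tgt is started so that only spdk_get_version and rpc_get_methods are callable, and any other method must come back as JSON-RPC error -32601 (Method not found). A rough manual reproduction, using the binary and script paths from the log:

BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Start the target with a whitelist of exactly two RPCs
# (the test polls the RPC socket with waitforlisten before issuing any call).
$BIN --rpcs-allowed spdk_get_version,rpc_get_methods &

# Whitelisted methods behave normally.
$RPC spdk_get_version
$RPC rpc_get_methods | jq -r '.[]' | sort

# Anything else is rejected, which is exactly what the NOT wrapper asserts above.
$RPC env_dpdk_get_mem_stats || echo 'rejected as expected'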
00:07:01.106 15:58:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:01.106 ************************************ 00:07:01.106 END TEST app_cmdline 00:07:01.106 ************************************ 00:07:01.106 15:58:36 -- common/autotest_common.sh@1142 -- # return 0 00:07:01.106 15:58:36 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:01.106 15:58:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:01.106 15:58:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.106 15:58:36 -- common/autotest_common.sh@10 -- # set +x 00:07:01.106 ************************************ 00:07:01.106 START TEST version 00:07:01.106 ************************************ 00:07:01.106 15:58:36 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:01.368 * Looking for test storage... 00:07:01.368 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:01.368 15:58:37 version -- app/version.sh@17 -- # get_header_version major 00:07:01.368 15:58:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:01.368 15:58:37 version -- app/version.sh@14 -- # cut -f2 00:07:01.368 15:58:37 version -- app/version.sh@14 -- # tr -d '"' 00:07:01.368 15:58:37 version -- app/version.sh@17 -- # major=24 00:07:01.368 15:58:37 version -- app/version.sh@18 -- # get_header_version minor 00:07:01.368 15:58:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:01.368 15:58:37 version -- app/version.sh@14 -- # cut -f2 00:07:01.368 15:58:37 version -- app/version.sh@14 -- # tr -d '"' 00:07:01.368 15:58:37 version -- app/version.sh@18 -- # minor=9 00:07:01.368 15:58:37 version -- app/version.sh@19 -- # get_header_version patch 00:07:01.368 15:58:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:01.368 15:58:37 version -- app/version.sh@14 -- # cut -f2 00:07:01.368 15:58:37 version -- app/version.sh@14 -- # tr -d '"' 00:07:01.368 15:58:37 version -- app/version.sh@19 -- # patch=0 00:07:01.368 15:58:37 version -- app/version.sh@20 -- # get_header_version suffix 00:07:01.368 15:58:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:01.368 15:58:37 version -- app/version.sh@14 -- # cut -f2 00:07:01.368 15:58:37 version -- app/version.sh@14 -- # tr -d '"' 00:07:01.368 15:58:37 version -- app/version.sh@20 -- # suffix=-pre 00:07:01.368 15:58:37 version -- app/version.sh@22 -- # version=24.9 00:07:01.368 15:58:37 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:01.368 15:58:37 version -- app/version.sh@28 -- # version=24.9rc0 00:07:01.368 15:58:37 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:01.368 15:58:37 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:07:01.368 15:58:37 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:01.368 15:58:37 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:01.368 00:07:01.368 real 0m0.171s 00:07:01.368 user 0m0.086s 00:07:01.368 sys 0m0.121s 00:07:01.368 15:58:37 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:01.368 15:58:37 version -- common/autotest_common.sh@10 -- # set +x 00:07:01.368 ************************************ 00:07:01.368 END TEST version 00:07:01.368 ************************************ 00:07:01.368 15:58:37 -- common/autotest_common.sh@1142 -- # return 0 00:07:01.368 15:58:37 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:01.368 15:58:37 -- spdk/autotest.sh@198 -- # uname -s 00:07:01.368 15:58:37 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:01.368 15:58:37 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:01.368 15:58:37 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:01.368 15:58:37 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:01.368 15:58:37 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:01.368 15:58:37 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:01.368 15:58:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:01.368 15:58:37 -- common/autotest_common.sh@10 -- # set +x 00:07:01.368 15:58:37 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:01.368 15:58:37 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:01.368 15:58:37 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:01.368 15:58:37 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:01.368 15:58:37 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:01.368 15:58:37 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:01.368 15:58:37 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:01.368 15:58:37 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:01.368 15:58:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.368 15:58:37 -- common/autotest_common.sh@10 -- # set +x 00:07:01.629 ************************************ 00:07:01.629 START TEST nvmf_tcp 00:07:01.629 ************************************ 00:07:01.629 15:58:37 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:01.629 * Looking for test storage... 00:07:01.629 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:01.629 15:58:37 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:01.629 15:58:37 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:01.629 15:58:37 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:01.629 15:58:37 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:01.629 15:58:37 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:01.629 15:58:37 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:01.629 15:58:37 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:01.629 15:58:37 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:01.629 15:58:37 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:01.629 15:58:37 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:01.629 15:58:37 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:01.629 15:58:37 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:01.629 15:58:37 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:01.629 15:58:37 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:01.629 15:58:37 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:01.629 15:58:37 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:01.629 15:58:37 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:01.629 15:58:37 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:01.629 15:58:37 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:01.629 15:58:37 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:01.629 15:58:37 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:01.629 15:58:37 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:01.629 15:58:37 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:01.629 15:58:37 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:01.629 15:58:37 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.630 15:58:37 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.630 15:58:37 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.630 15:58:37 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:01.630 15:58:37 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.630 15:58:37 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:01.630 15:58:37 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:01.630 15:58:37 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:01.630 15:58:37 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:01.630 15:58:37 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:01.630 15:58:37 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:01.630 15:58:37 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:01.630 15:58:37 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:01.630 15:58:37 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:01.630 15:58:37 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:01.630 15:58:37 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:01.630 15:58:37 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:01.630 15:58:37 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:01.630 15:58:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:01.630 15:58:37 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:01.630 15:58:37 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:01.630 15:58:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:01.630 15:58:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.630 15:58:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:01.630 ************************************ 00:07:01.630 START TEST nvmf_example 00:07:01.630 ************************************ 00:07:01.630 15:58:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:01.891 * Looking for test storage... 
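The version test that wrapped up just before this nvmf run does nothing more than scrape include/spdk/version.h with grep, cut and tr and compare the result against what the Python bindings report. Condensed from the pipeline in the log (cut -f2 relies on the tab-separated #define lines in the header, as the test itself does):

HDR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h

major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$HDR" | cut -f2 | tr -d '"')
minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$HDR" | cut -f2 | tr -d '"')
suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$HDR" | cut -f2 | tr -d '"')

# In this run: major=24, minor=9, patch=0, suffix=-pre, which version.sh maps to
# the expected string 24.9rc0 and compares against:
#   python3 -c 'import spdk; print(spdk.__version__)'
echo "${major}.${minor}${suffix}"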
00:07:01.891 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:01.891 15:58:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:08.543 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:08.543 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:08.543 Found net devices under 
0000:4b:00.0: cvl_0_0 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:08.543 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:08.544 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:08.544 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:08.544 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:08.544 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:08.544 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:08.544 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:08.544 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:08.544 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:08.544 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:08.544 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:08.544 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:08.544 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:08.544 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:08.544 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:08.544 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:08.544 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:08.544 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:08.544 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:08.544 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:08.544 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:08.544 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:08.544 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:08.544 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:08.804 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:08.804 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:08.804 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:08.804 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:08.804 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:08.804 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:08.804 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:08.804 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:08.804 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.503 ms 00:07:08.804 00:07:08.804 --- 10.0.0.2 ping statistics --- 00:07:08.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.804 rtt min/avg/max/mdev = 0.503/0.503/0.503/0.000 ms 00:07:08.804 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:08.804 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:08.804 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.363 ms 00:07:08.804 00:07:08.804 --- 10.0.0.1 ping statistics --- 00:07:08.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.804 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:07:08.804 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:08.804 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:08.804 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:08.804 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:08.804 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:09.065 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:09.065 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:09.065 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:09.065 15:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:09.065 15:58:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:09.065 15:58:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:09.065 15:58:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:09.065 15:58:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:09.065 15:58:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:09.065 15:58:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:09.065 15:58:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2095160 00:07:09.065 15:58:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:09.065 15:58:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:09.065 15:58:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2095160 00:07:09.065 15:58:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 2095160 ']' 00:07:09.065 15:58:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.065 15:58:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:09.065 15:58:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
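Because NET_TYPE=phy, nvmftestinit splits the two detected e810 ports between the host and a private network namespace: cvl_0_0 becomes the target-side interface (10.0.0.2) inside cvl_0_0_ns_spdk, while cvl_0_1 stays in the host namespace as the initiator side (10.0.0.1). Condensed from the ip/iptables commands in the log above:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Allow NVMe/TCP on the initiator port and sanity-check reachability both ways.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1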
00:07:09.065 15:58:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:09.065 15:58:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:09.065 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.003 15:58:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:10.003 15:58:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:07:10.003 15:58:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:10.003 15:58:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:10.003 15:58:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:10.003 15:58:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:10.003 15:58:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.003 15:58:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:10.003 15:58:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.003 15:58:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:10.003 15:58:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.003 15:58:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:10.003 15:58:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.003 15:58:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:10.003 15:58:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:10.003 15:58:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.003 15:58:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:10.003 15:58:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.003 15:58:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:10.003 15:58:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:10.003 15:58:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.003 15:58:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:10.003 15:58:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.003 15:58:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:10.003 15:58:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.003 15:58:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:10.003 15:58:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.003 15:58:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:10.003 15:58:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:10.003 EAL: No free 2048 kB hugepages reported on node 1 
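With the namespace wired up, the nvmf example target is configured entirely over JSON-RPC and then driven from the host side with spdk_nvme_perf. The sequence above reduces to roughly the following; rpc.py stands in here for the test's rpc_cmd wrapper and talks to the example app's default /var/tmp RPC socket:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf

# Target side: TCP transport, one small malloc bdev (size 64, block size 512,
# the values set in nvmf_example.sh), one subsystem with a listener on 10.0.0.2:4420.
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512        # the log shows this returning "Malloc0"
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: 10 s of 4 KiB random I/O at queue depth 64, 30% reads / 70% writes,
# which produces the IOPS/latency summary printed a little further on.
$PERF -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'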
00:07:22.227 Initializing NVMe Controllers 00:07:22.227 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:22.227 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:22.227 Initialization complete. Launching workers. 00:07:22.227 ======================================================== 00:07:22.227 Latency(us) 00:07:22.227 Device Information : IOPS MiB/s Average min max 00:07:22.227 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17323.00 67.67 3694.20 839.67 15730.77 00:07:22.227 ======================================================== 00:07:22.227 Total : 17323.00 67.67 3694.20 839.67 15730.77 00:07:22.227 00:07:22.227 15:58:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:22.227 15:58:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:22.228 15:58:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:22.228 15:58:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:22.228 15:58:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:22.228 15:58:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:22.228 15:58:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:22.228 15:58:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:22.228 rmmod nvme_tcp 00:07:22.228 rmmod nvme_fabrics 00:07:22.228 rmmod nvme_keyring 00:07:22.228 15:58:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:22.228 15:58:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:22.228 15:58:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:22.228 15:58:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2095160 ']' 00:07:22.228 15:58:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2095160 00:07:22.228 15:58:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 2095160 ']' 00:07:22.228 15:58:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 2095160 00:07:22.228 15:58:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:07:22.228 15:58:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:22.228 15:58:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2095160 00:07:22.228 15:58:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:07:22.228 15:58:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:07:22.228 15:58:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2095160' 00:07:22.228 killing process with pid 2095160 00:07:22.228 15:58:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 2095160 00:07:22.228 15:58:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 2095160 00:07:22.228 nvmf threads initialize successfully 00:07:22.228 bdev subsystem init successfully 00:07:22.228 created a nvmf target service 00:07:22.228 create targets's poll groups done 00:07:22.228 all subsystems of target started 00:07:22.228 nvmf target is running 00:07:22.228 all subsystems of target stopped 00:07:22.228 destroy targets's poll groups done 00:07:22.228 destroyed the nvmf target service 00:07:22.228 bdev subsystem finish successfully 00:07:22.228 nvmf threads destroy successfully 00:07:22.228 15:58:56 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:22.228 15:58:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:22.228 15:58:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:22.228 15:58:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:22.228 15:58:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:22.228 15:58:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:22.228 15:58:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:22.228 15:58:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:22.488 15:58:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:22.488 15:58:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:22.488 15:58:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:22.488 15:58:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:22.488 00:07:22.488 real 0m20.811s 00:07:22.488 user 0m46.353s 00:07:22.488 sys 0m6.413s 00:07:22.488 15:58:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.488 15:58:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:22.488 ************************************ 00:07:22.488 END TEST nvmf_example 00:07:22.488 ************************************ 00:07:22.489 15:58:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:22.489 15:58:58 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:22.489 15:58:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:22.489 15:58:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.489 15:58:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:22.489 ************************************ 00:07:22.489 START TEST nvmf_filesystem 00:07:22.489 ************************************ 00:07:22.489 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:22.752 * Looking for test storage... 
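Teardown (nvmftestfini) is the mirror image of the setup: unload the kernel NVMe-oF modules, kill the example target, and dismantle the namespace plumbing. Roughly, per the rmmod/kill/flush lines above; the body of the _remove_spdk_ns helper is not shown in the log, so deleting the namespace is an assumed equivalent.

modprobe -v -r nvme-tcp            # the log shows nvme_tcp, nvme_fabrics and nvme_keyring being removed
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                    # 2095160 in this run, followed by a wait on the pid
ip netns delete cvl_0_0_ns_spdk    # assumed effect of _remove_spdk_ns
ip -4 addr flush cvl_0_1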
00:07:22.752 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:22.752 15:58:58 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:22.752 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:22.753 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:22.753 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:22.753 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:22.753 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:22.753 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:22.753 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:22.753 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:22.753 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:22.753 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:22.753 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:22.753 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:22.753 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:22.753 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:22.753 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:22.753 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:22.753 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:22.753 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:22.753 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:22.753 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:22.753 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:22.753 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:22.753 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:22.753 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:22.753 15:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:22.753 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:22.753 15:58:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:22.753 15:58:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:22.753 15:58:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:22.753 15:58:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:22.753 15:58:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:22.753 15:58:58 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:22.753 15:58:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:22.753 15:58:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:22.753 15:58:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:22.753 15:58:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:22.753 15:58:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:22.753 15:58:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:22.753 15:58:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:22.753 15:58:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:22.753 15:58:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:22.753 #define SPDK_CONFIG_H 00:07:22.753 #define SPDK_CONFIG_APPS 1 00:07:22.753 #define SPDK_CONFIG_ARCH native 00:07:22.753 #undef SPDK_CONFIG_ASAN 00:07:22.753 #undef SPDK_CONFIG_AVAHI 00:07:22.753 #undef SPDK_CONFIG_CET 00:07:22.753 #define SPDK_CONFIG_COVERAGE 1 00:07:22.753 #define SPDK_CONFIG_CROSS_PREFIX 00:07:22.753 #undef SPDK_CONFIG_CRYPTO 00:07:22.753 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:22.753 #undef SPDK_CONFIG_CUSTOMOCF 00:07:22.753 #undef SPDK_CONFIG_DAOS 00:07:22.753 #define SPDK_CONFIG_DAOS_DIR 00:07:22.753 #define SPDK_CONFIG_DEBUG 1 00:07:22.753 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:22.753 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:22.753 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:22.753 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:22.753 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:22.753 #undef SPDK_CONFIG_DPDK_UADK 00:07:22.753 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:22.753 #define SPDK_CONFIG_EXAMPLES 1 00:07:22.753 #undef SPDK_CONFIG_FC 00:07:22.753 #define SPDK_CONFIG_FC_PATH 00:07:22.753 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:22.753 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:22.753 #undef SPDK_CONFIG_FUSE 00:07:22.753 #undef SPDK_CONFIG_FUZZER 00:07:22.753 #define SPDK_CONFIG_FUZZER_LIB 00:07:22.753 #undef SPDK_CONFIG_GOLANG 00:07:22.753 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:22.753 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:22.753 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:22.753 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:22.753 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:22.753 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:22.753 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:22.753 #define SPDK_CONFIG_IDXD 1 00:07:22.753 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:22.753 #undef SPDK_CONFIG_IPSEC_MB 00:07:22.753 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:22.753 #define SPDK_CONFIG_ISAL 1 00:07:22.753 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:22.753 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:22.753 #define SPDK_CONFIG_LIBDIR 00:07:22.753 #undef SPDK_CONFIG_LTO 00:07:22.753 #define SPDK_CONFIG_MAX_LCORES 128 00:07:22.753 #define SPDK_CONFIG_NVME_CUSE 1 00:07:22.753 #undef SPDK_CONFIG_OCF 00:07:22.753 #define SPDK_CONFIG_OCF_PATH 00:07:22.753 #define 
SPDK_CONFIG_OPENSSL_PATH 00:07:22.753 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:22.753 #define SPDK_CONFIG_PGO_DIR 00:07:22.753 #undef SPDK_CONFIG_PGO_USE 00:07:22.753 #define SPDK_CONFIG_PREFIX /usr/local 00:07:22.753 #undef SPDK_CONFIG_RAID5F 00:07:22.753 #undef SPDK_CONFIG_RBD 00:07:22.753 #define SPDK_CONFIG_RDMA 1 00:07:22.753 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:22.753 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:22.753 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:22.753 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:22.753 #define SPDK_CONFIG_SHARED 1 00:07:22.753 #undef SPDK_CONFIG_SMA 00:07:22.753 #define SPDK_CONFIG_TESTS 1 00:07:22.753 #undef SPDK_CONFIG_TSAN 00:07:22.753 #define SPDK_CONFIG_UBLK 1 00:07:22.753 #define SPDK_CONFIG_UBSAN 1 00:07:22.753 #undef SPDK_CONFIG_UNIT_TESTS 00:07:22.753 #undef SPDK_CONFIG_URING 00:07:22.753 #define SPDK_CONFIG_URING_PATH 00:07:22.753 #undef SPDK_CONFIG_URING_ZNS 00:07:22.753 #undef SPDK_CONFIG_USDT 00:07:22.753 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:22.753 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:22.753 #define SPDK_CONFIG_VFIO_USER 1 00:07:22.753 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:22.753 #define SPDK_CONFIG_VHOST 1 00:07:22.753 #define SPDK_CONFIG_VIRTIO 1 00:07:22.753 #undef SPDK_CONFIG_VTUNE 00:07:22.753 #define SPDK_CONFIG_VTUNE_DIR 00:07:22.753 #define SPDK_CONFIG_WERROR 1 00:07:22.753 #define SPDK_CONFIG_WPDK_DIR 00:07:22.753 #undef SPDK_CONFIG_XNVME 00:07:22.753 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:22.753 15:58:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:22.753 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:22.753 15:58:58 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:22.753 15:58:58 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:22.753 15:58:58 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:22.753 15:58:58 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.753 15:58:58 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.753 15:58:58 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.753 15:58:58 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:22.754 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:22.755 15:58:58 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
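The autotest_common.sh trace above pins down the sanitizer and RPC defaults that the rest of this run inherits. Below is a minimal standalone sketch of that environment, assuming a bash shell and using only values visible in the trace (the ASAN/UBSAN option strings, the libfuse3 leak suppression, and /var/tmp/spdk.sock as the default RPC socket); it illustrates the traced exports, not the full autotest_common.sh logic.

    # Sanitizer defaults as exported in the trace above.
    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134

    # LeakSanitizer suppression file: the run recreates it and whitelists libfuse3 leaks.
    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    echo "leak:libfuse3.so" >> "$asan_suppression_file"
    export LSAN_OPTIONS=suppressions=$asan_suppression_file

    # Default JSON-RPC socket that waitforlisten polls once nvmf_tgt is started.
    export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock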
00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j144 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:22.755 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 2098085 ]] 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 2098085 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.Goj2Kf 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.Goj2Kf/tests/target /tmp/spdk.Goj2Kf 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- 
# avails["$mount"]=67108864 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=954236928 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330192896 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=118674010112 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=129371013120 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=10697003008 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64680796160 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685506560 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=25864503296 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=25874202624 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9699328 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=efivarfs 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=efivarfs 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=216064 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=507904 00:07:22.756 15:58:58 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=287744 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64684171264 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685506560 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=1335296 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12937097216 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12937101312 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:22.756 * Looking for test storage... 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=118674010112 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=12911595520 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:22.756 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:22.756 15:58:58 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:22.757 15:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:22.757 15:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:22.757 15:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:23.017 15:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:23.017 15:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:23.017 15:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:23.017 15:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:23.017 15:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:23.017 15:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:23.017 15:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:23.018 15:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:23.018 15:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:23.018 15:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:23.018 15:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:07:23.018 15:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:23.018 15:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:23.018 15:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:23.018 15:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:23.018 15:58:58 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:23.018 15:58:58 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:23.018 15:58:58 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:23.018 15:58:58 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.018 15:58:58 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.018 15:58:58 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.018 15:58:58 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:23.018 15:58:58 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.018 15:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:23.018 15:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:23.018 15:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:23.018 15:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:23.018 15:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:23.018 15:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:23.018 15:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:23.018 15:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:23.018 15:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:23.018 15:58:58 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:23.018 15:58:58 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:23.018 15:58:58 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:23.018 15:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:23.018 15:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:23.018 15:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:23.018 15:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:23.018 15:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:23.018 15:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:23.018 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:23.018 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:23.018 15:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:23.018 15:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:23.018 15:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:23.018 15:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:29.607 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:29.607 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:29.607 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:29.607 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:29.607 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:29.607 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:29.607 15:59:05 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:29.608 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 
(0x8086 - 0x159b)' 00:07:29.608 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:29.608 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:29.608 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:29.608 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:29.869 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:29.869 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:29.869 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:29.869 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:29.869 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:29.869 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:29.869 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:29.869 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:29.869 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.500 ms 00:07:29.869 00:07:29.869 --- 10.0.0.2 ping statistics --- 00:07:29.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.869 rtt min/avg/max/mdev = 0.500/0.500/0.500/0.000 ms 00:07:29.869 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:29.869 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:29.869 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.338 ms 00:07:29.869 00:07:29.869 --- 10.0.0.1 ping statistics --- 00:07:29.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.869 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:07:29.869 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:29.869 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:29.869 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:29.869 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:29.869 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:29.869 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:29.869 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:29.869 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:29.869 15:59:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:30.129 15:59:05 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:30.129 15:59:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:30.129 15:59:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.129 15:59:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:30.129 ************************************ 00:07:30.129 START TEST nvmf_filesystem_no_in_capsule 00:07:30.129 ************************************ 00:07:30.130 15:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:07:30.130 15:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:30.130 15:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:30.130 15:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:30.130 15:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:30.130 15:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:30.130 15:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2101867 00:07:30.130 15:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2101867 00:07:30.130 15:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:30.130 15:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 2101867 ']' 00:07:30.130 15:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.130 15:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:30.130 15:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.130 15:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:30.130 15:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:30.130 [2024-07-15 15:59:05.827061] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:07:30.130 [2024-07-15 15:59:05.827108] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:30.130 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.130 [2024-07-15 15:59:05.892858] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:30.130 [2024-07-15 15:59:05.960478] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:30.130 [2024-07-15 15:59:05.960515] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:30.130 [2024-07-15 15:59:05.960523] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:30.130 [2024-07-15 15:59:05.960529] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:30.130 [2024-07-15 15:59:05.960535] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:30.130 [2024-07-15 15:59:05.960678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.130 [2024-07-15 15:59:05.960796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:30.130 [2024-07-15 15:59:05.960958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.130 [2024-07-15 15:59:05.960959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:31.071 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:31.071 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:31.071 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:31.071 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:31.071 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.071 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:31.071 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:31.071 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:31.071 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.071 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.071 [2024-07-15 15:59:06.649762] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
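For reference, the nvmf_tcp_init bring-up that the xtrace above just walked through reduces to the commands below. This is a condensed sketch re-assembled from the trace, not a verbatim excerpt of nvmf/common.sh; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are simply the values this run used.

    ip netns add cvl_0_0_ns_spdk                                        # namespace that will hold the target-side port
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target-side E810 netdev into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side stays in the default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target-side address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                  # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator sanity check
    # the harness then starts the target inside the namespace and waits for /var/tmp/spdk.sock:
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF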
00:07:31.071 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.071 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:31.071 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.071 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.071 Malloc1 00:07:31.071 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.071 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:31.071 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.071 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.071 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.071 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:31.071 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.071 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.071 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.071 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:31.071 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.071 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.071 [2024-07-15 15:59:06.786415] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:31.071 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.071 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:31.071 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:31.071 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:31.071 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:31.071 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:31.071 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:31.071 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.071 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:31.071 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.071 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:31.071 { 00:07:31.071 "name": "Malloc1", 00:07:31.071 "aliases": [ 00:07:31.071 "1d4763c0-72ee-42a4-bcea-0dec9a68b603" 00:07:31.071 ], 00:07:31.071 "product_name": "Malloc disk", 00:07:31.071 "block_size": 512, 00:07:31.071 "num_blocks": 1048576, 00:07:31.071 "uuid": "1d4763c0-72ee-42a4-bcea-0dec9a68b603", 00:07:31.071 "assigned_rate_limits": { 00:07:31.071 "rw_ios_per_sec": 0, 00:07:31.071 "rw_mbytes_per_sec": 0, 00:07:31.071 "r_mbytes_per_sec": 0, 00:07:31.071 "w_mbytes_per_sec": 0 00:07:31.071 }, 00:07:31.071 "claimed": true, 00:07:31.071 "claim_type": "exclusive_write", 00:07:31.071 "zoned": false, 00:07:31.071 "supported_io_types": { 00:07:31.071 "read": true, 00:07:31.071 "write": true, 00:07:31.071 "unmap": true, 00:07:31.071 "flush": true, 00:07:31.071 "reset": true, 00:07:31.071 "nvme_admin": false, 00:07:31.071 "nvme_io": false, 00:07:31.071 "nvme_io_md": false, 00:07:31.071 "write_zeroes": true, 00:07:31.071 "zcopy": true, 00:07:31.071 "get_zone_info": false, 00:07:31.071 "zone_management": false, 00:07:31.071 "zone_append": false, 00:07:31.071 "compare": false, 00:07:31.071 "compare_and_write": false, 00:07:31.071 "abort": true, 00:07:31.071 "seek_hole": false, 00:07:31.071 "seek_data": false, 00:07:31.071 "copy": true, 00:07:31.071 "nvme_iov_md": false 00:07:31.071 }, 00:07:31.071 "memory_domains": [ 00:07:31.071 { 00:07:31.071 "dma_device_id": "system", 00:07:31.071 "dma_device_type": 1 00:07:31.071 }, 00:07:31.071 { 00:07:31.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.071 "dma_device_type": 2 00:07:31.071 } 00:07:31.071 ], 00:07:31.071 "driver_specific": {} 00:07:31.071 } 00:07:31.071 ]' 00:07:31.071 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:31.071 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:31.071 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:31.071 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:31.071 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:31.071 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:31.071 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:31.071 15:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:32.982 15:59:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:32.982 15:59:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:32.982 15:59:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local 
nvme_device_counter=1 nvme_devices=0 00:07:32.982 15:59:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:32.982 15:59:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:34.896 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:34.896 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:34.896 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:34.896 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:34.896 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:34.896 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:34.896 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:34.896 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:34.896 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:34.896 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:34.896 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:34.896 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:34.896 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:34.896 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:34.896 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:34.896 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:34.896 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:34.896 15:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:35.466 15:59:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:36.407 15:59:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:36.407 15:59:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:36.407 15:59:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:36.407 15:59:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.407 15:59:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:36.407 
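At this point the target has been configured over its RPC socket and the host has attached to it. Condensed from the rpc_cmd and nvme calls traced above (rpc_cmd here is the autotest wrapper that sends these verbs to the running nvmf_tgt over its RPC socket, /var/tmp/spdk.sock in this run), the no-in-capsule pass amounts to:

    # target side, via the RPC socket
    nvmf_create_transport -t tcp -o -u 8192 -c 0                       # in-capsule data size 0 for this variant
    bdev_malloc_create 512 512 -b Malloc1                              # 512 MiB backing bdev (1048576 x 512-byte blocks)
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # host side
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
                 --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be \
                 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME             # poll until the namespace shows up as nvme0n1
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe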
************************************ 00:07:36.407 START TEST filesystem_ext4 00:07:36.407 ************************************ 00:07:36.407 15:59:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:36.407 15:59:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:36.407 15:59:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:36.407 15:59:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:36.407 15:59:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:36.407 15:59:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:36.407 15:59:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:36.407 15:59:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:36.407 15:59:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:36.407 15:59:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:36.407 15:59:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:36.407 mke2fs 1.46.5 (30-Dec-2021) 00:07:36.669 Discarding device blocks: 0/522240 done 00:07:36.669 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:36.669 Filesystem UUID: 300cb311-4201-4847-805d-8248108330c3 00:07:36.669 Superblock backups stored on blocks: 00:07:36.669 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:36.669 00:07:36.669 Allocating group tables: 0/64 done 00:07:36.669 Writing inode tables: 0/64 done 00:07:36.669 Creating journal (8192 blocks): done 00:07:36.669 Writing superblocks and filesystem accounting information: 0/64 done 00:07:36.669 00:07:36.669 15:59:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:36.669 15:59:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:36.929 15:59:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:36.929 15:59:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:36.929 15:59:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:36.929 15:59:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:36.929 15:59:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:36.929 15:59:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:36.929 15:59:12 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2101867 00:07:36.929 15:59:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:36.929 15:59:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:36.929 15:59:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:36.929 15:59:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:36.929 00:07:36.929 real 0m0.529s 00:07:36.929 user 0m0.020s 00:07:36.929 sys 0m0.075s 00:07:36.929 15:59:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.929 15:59:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:36.929 ************************************ 00:07:36.929 END TEST filesystem_ext4 00:07:36.929 ************************************ 00:07:37.190 15:59:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:37.190 15:59:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:37.190 15:59:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:37.190 15:59:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.190 15:59:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:37.190 ************************************ 00:07:37.190 START TEST filesystem_btrfs 00:07:37.190 ************************************ 00:07:37.190 15:59:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:37.190 15:59:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:37.190 15:59:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:37.190 15:59:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:37.190 15:59:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:37.190 15:59:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:37.190 15:59:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:37.190 15:59:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:37.190 15:59:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:37.190 15:59:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:37.190 
15:59:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:37.451 btrfs-progs v6.6.2 00:07:37.451 See https://btrfs.readthedocs.io for more information. 00:07:37.451 00:07:37.451 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:37.451 NOTE: several default settings have changed in version 5.15, please make sure 00:07:37.451 this does not affect your deployments: 00:07:37.451 - DUP for metadata (-m dup) 00:07:37.451 - enabled no-holes (-O no-holes) 00:07:37.451 - enabled free-space-tree (-R free-space-tree) 00:07:37.451 00:07:37.451 Label: (null) 00:07:37.451 UUID: d2314f8f-9385-4a52-98d6-97a69dcc0a71 00:07:37.451 Node size: 16384 00:07:37.451 Sector size: 4096 00:07:37.451 Filesystem size: 510.00MiB 00:07:37.451 Block group profiles: 00:07:37.451 Data: single 8.00MiB 00:07:37.451 Metadata: DUP 32.00MiB 00:07:37.451 System: DUP 8.00MiB 00:07:37.451 SSD detected: yes 00:07:37.451 Zoned device: no 00:07:37.451 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:37.451 Runtime features: free-space-tree 00:07:37.451 Checksum: crc32c 00:07:37.451 Number of devices: 1 00:07:37.451 Devices: 00:07:37.451 ID SIZE PATH 00:07:37.451 1 510.00MiB /dev/nvme0n1p1 00:07:37.451 00:07:37.451 15:59:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:37.451 15:59:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:38.416 15:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:38.416 15:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:38.416 15:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:38.416 15:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:38.416 15:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:38.416 15:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:38.416 15:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2101867 00:07:38.416 15:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:38.416 15:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:38.416 15:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:38.416 15:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:38.416 00:07:38.416 real 0m1.308s 00:07:38.416 user 0m0.026s 00:07:38.416 sys 0m0.133s 00:07:38.416 15:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.416 15:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 
00:07:38.416 ************************************ 00:07:38.416 END TEST filesystem_btrfs 00:07:38.416 ************************************ 00:07:38.416 15:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:38.416 15:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:38.416 15:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:38.416 15:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.416 15:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:38.416 ************************************ 00:07:38.416 START TEST filesystem_xfs 00:07:38.416 ************************************ 00:07:38.416 15:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:38.416 15:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:38.416 15:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:38.416 15:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:38.416 15:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:38.416 15:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:38.416 15:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:38.416 15:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:07:38.416 15:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:38.416 15:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:38.416 15:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:38.677 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:38.677 = sectsz=512 attr=2, projid32bit=1 00:07:38.677 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:38.677 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:38.677 data = bsize=4096 blocks=130560, imaxpct=25 00:07:38.677 = sunit=0 swidth=0 blks 00:07:38.677 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:38.677 log =internal log bsize=4096 blocks=16384, version=2 00:07:38.677 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:38.677 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:39.624 Discarding blocks...Done. 
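Each filesystem_* subtest in this block (ext4 and btrfs above, xfs in progress here) runs the same nvmf_filesystem_create body from target/filesystem.sh against the exported namespace. Condensed from the trace, with the filesystem type and force flag as the only per-test variables (umount retry handling omitted):

    mkfs.$fstype $force /dev/nvme0n1p1        # force is -F for ext4, -f for btrfs and xfs
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 $nvmfpid                          # the nvmf_tgt process must still be running
    lsblk -l -o NAME | grep -q -w nvme0n1     # the attached namespace must still be visible
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # and so must its partition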
00:07:39.624 15:59:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:39.624 15:59:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:42.169 15:59:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:42.169 15:59:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:42.169 15:59:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:42.169 15:59:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:42.169 15:59:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:42.169 15:59:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:42.169 15:59:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2101867 00:07:42.169 15:59:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:42.169 15:59:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:42.169 15:59:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:42.169 15:59:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:42.169 00:07:42.169 real 0m3.747s 00:07:42.169 user 0m0.034s 00:07:42.169 sys 0m0.068s 00:07:42.169 15:59:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.169 15:59:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:42.169 ************************************ 00:07:42.169 END TEST filesystem_xfs 00:07:42.169 ************************************ 00:07:42.169 15:59:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:42.169 15:59:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:42.429 15:59:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:42.429 15:59:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:42.689 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:42.689 15:59:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:42.689 15:59:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:42.689 15:59:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:42.689 15:59:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:42.689 15:59:18 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:42.689 15:59:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:42.689 15:59:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:42.689 15:59:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:42.689 15:59:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.689 15:59:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:42.689 15:59:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.689 15:59:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:42.689 15:59:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2101867 00:07:42.689 15:59:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 2101867 ']' 00:07:42.689 15:59:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 2101867 00:07:42.689 15:59:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:42.689 15:59:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:42.689 15:59:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2101867 00:07:42.689 15:59:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:42.689 15:59:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:42.689 15:59:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2101867' 00:07:42.689 killing process with pid 2101867 00:07:42.689 15:59:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 2101867 00:07:42.689 15:59:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 2101867 00:07:42.949 15:59:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:42.949 00:07:42.949 real 0m12.900s 00:07:42.949 user 0m50.865s 00:07:42.949 sys 0m1.195s 00:07:42.949 15:59:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.949 15:59:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:42.949 ************************************ 00:07:42.949 END TEST nvmf_filesystem_no_in_capsule 00:07:42.949 ************************************ 00:07:42.949 15:59:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:42.949 15:59:18 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:42.949 15:59:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:07:42.949 15:59:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.949 15:59:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:42.949 ************************************ 00:07:42.949 START TEST nvmf_filesystem_in_capsule 00:07:42.949 ************************************ 00:07:42.949 15:59:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:07:42.949 15:59:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:42.949 15:59:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:42.949 15:59:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:42.949 15:59:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:42.949 15:59:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:42.949 15:59:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2104501 00:07:42.949 15:59:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2104501 00:07:42.949 15:59:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:42.949 15:59:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 2104501 ']' 00:07:42.949 15:59:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.949 15:59:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:42.949 15:59:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.949 15:59:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:42.950 15:59:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.210 [2024-07-15 15:59:18.815888] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:07:43.210 [2024-07-15 15:59:18.815948] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:43.210 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.210 [2024-07-15 15:59:18.888237] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:43.210 [2024-07-15 15:59:18.961949] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:43.210 [2024-07-15 15:59:18.961992] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:43.210 [2024-07-15 15:59:18.962000] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:43.210 [2024-07-15 15:59:18.962006] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:43.210 [2024-07-15 15:59:18.962012] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:43.210 [2024-07-15 15:59:18.962165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.210 [2024-07-15 15:59:18.962382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.210 [2024-07-15 15:59:18.962383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:43.210 [2024-07-15 15:59:18.962230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:43.780 15:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:43.780 15:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:43.780 15:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:43.780 15:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:43.780 15:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.039 15:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:44.039 15:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:44.039 15:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:44.039 15:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.039 15:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.039 [2024-07-15 15:59:19.638749] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:44.039 15:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.039 15:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:44.039 15:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.039 15:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.039 Malloc1 00:07:44.039 15:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.039 15:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:44.039 15:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.039 15:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.039 15:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.039 15:59:19 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:44.039 15:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.039 15:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.039 15:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.039 15:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:44.039 15:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.039 15:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.039 [2024-07-15 15:59:19.765433] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:44.039 15:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.039 15:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:44.039 15:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:44.039 15:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:44.039 15:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:44.039 15:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:44.039 15:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:44.039 15:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.039 15:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.039 15:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.039 15:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:44.039 { 00:07:44.039 "name": "Malloc1", 00:07:44.039 "aliases": [ 00:07:44.039 "1549503c-805c-4bfd-9616-f48c2a2ed6e9" 00:07:44.039 ], 00:07:44.039 "product_name": "Malloc disk", 00:07:44.039 "block_size": 512, 00:07:44.039 "num_blocks": 1048576, 00:07:44.039 "uuid": "1549503c-805c-4bfd-9616-f48c2a2ed6e9", 00:07:44.039 "assigned_rate_limits": { 00:07:44.039 "rw_ios_per_sec": 0, 00:07:44.039 "rw_mbytes_per_sec": 0, 00:07:44.039 "r_mbytes_per_sec": 0, 00:07:44.040 "w_mbytes_per_sec": 0 00:07:44.040 }, 00:07:44.040 "claimed": true, 00:07:44.040 "claim_type": "exclusive_write", 00:07:44.040 "zoned": false, 00:07:44.040 "supported_io_types": { 00:07:44.040 "read": true, 00:07:44.040 "write": true, 00:07:44.040 "unmap": true, 00:07:44.040 "flush": true, 00:07:44.040 "reset": true, 00:07:44.040 "nvme_admin": false, 00:07:44.040 "nvme_io": false, 00:07:44.040 "nvme_io_md": false, 00:07:44.040 "write_zeroes": true, 00:07:44.040 "zcopy": true, 00:07:44.040 "get_zone_info": false, 00:07:44.040 "zone_management": false, 00:07:44.040 
"zone_append": false, 00:07:44.040 "compare": false, 00:07:44.040 "compare_and_write": false, 00:07:44.040 "abort": true, 00:07:44.040 "seek_hole": false, 00:07:44.040 "seek_data": false, 00:07:44.040 "copy": true, 00:07:44.040 "nvme_iov_md": false 00:07:44.040 }, 00:07:44.040 "memory_domains": [ 00:07:44.040 { 00:07:44.040 "dma_device_id": "system", 00:07:44.040 "dma_device_type": 1 00:07:44.040 }, 00:07:44.040 { 00:07:44.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.040 "dma_device_type": 2 00:07:44.040 } 00:07:44.040 ], 00:07:44.040 "driver_specific": {} 00:07:44.040 } 00:07:44.040 ]' 00:07:44.040 15:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:44.040 15:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:44.040 15:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:44.300 15:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:44.300 15:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:44.300 15:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:44.300 15:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:44.300 15:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:45.682 15:59:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:45.682 15:59:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:45.682 15:59:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:45.682 15:59:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:45.682 15:59:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:47.593 15:59:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:47.593 15:59:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:47.593 15:59:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:47.593 15:59:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:47.593 15:59:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:47.593 15:59:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:47.593 15:59:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:47.593 15:59:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:07:47.593 15:59:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:47.593 15:59:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:47.593 15:59:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:47.593 15:59:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:47.593 15:59:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:47.593 15:59:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:47.593 15:59:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:47.593 15:59:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:47.593 15:59:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:47.854 15:59:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:48.425 15:59:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:49.364 15:59:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:49.364 15:59:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:49.364 15:59:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:49.364 15:59:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.364 15:59:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:49.364 ************************************ 00:07:49.364 START TEST filesystem_in_capsule_ext4 00:07:49.364 ************************************ 00:07:49.364 15:59:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:49.364 15:59:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:49.364 15:59:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:49.364 15:59:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:49.364 15:59:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:49.364 15:59:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:49.364 15:59:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:49.364 15:59:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:49.364 15:59:25 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:49.364 15:59:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:49.364 15:59:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:49.364 mke2fs 1.46.5 (30-Dec-2021) 00:07:49.624 Discarding device blocks: 0/522240 done 00:07:49.624 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:49.624 Filesystem UUID: 5e84461b-1c0f-41b3-9161-498a83bb1a2e 00:07:49.624 Superblock backups stored on blocks: 00:07:49.624 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:49.624 00:07:49.624 Allocating group tables: 0/64 done 00:07:49.624 Writing inode tables: 0/64 done 00:07:50.564 Creating journal (8192 blocks): done 00:07:50.564 Writing superblocks and filesystem accounting information: 0/64 done 00:07:50.564 00:07:50.564 15:59:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:50.564 15:59:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:50.564 15:59:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:50.564 15:59:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:50.564 15:59:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:50.564 15:59:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:50.564 15:59:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:50.823 15:59:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:50.824 15:59:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2104501 00:07:50.824 15:59:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:50.824 15:59:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:50.824 15:59:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:50.824 15:59:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:50.824 00:07:50.824 real 0m1.255s 00:07:50.824 user 0m0.021s 00:07:50.824 sys 0m0.076s 00:07:50.824 15:59:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.824 15:59:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:50.824 ************************************ 00:07:50.824 END TEST filesystem_in_capsule_ext4 00:07:50.824 ************************************ 00:07:50.824 
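The ext4 sub-test that just finished, and the btrfs and xfs runs that follow, all drive the same create-mount-write-unmount check. A rough sketch of that sequence as it appears in the trace (device and mount point are copied from the log; the helper name is invented here, and the real make_filesystem helper additionally retries and checks exit codes):

  make_and_check() {                      # hypothetical wrapper, for illustration only
      local fstype=$1 dev=/dev/nvme0n1p1
      local force=-f
      [ "$fstype" = ext4 ] && force=-F    # mkfs.ext4 takes -F, mkfs.btrfs/mkfs.xfs take -f
      mkfs."$fstype" "$force" "$dev"
      mount "$dev" /mnt/device
      touch /mnt/device/aaa               # prove the new filesystem is writable
      sync
      rm /mnt/device/aaa
      sync
      umount /mnt/device
  }
  make_and_check ext4                     # the btrfs and xfs sub-tests repeat the same steps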
15:59:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:50.824 15:59:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:50.824 15:59:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:50.824 15:59:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.824 15:59:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:50.824 ************************************ 00:07:50.824 START TEST filesystem_in_capsule_btrfs 00:07:50.824 ************************************ 00:07:50.824 15:59:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:50.824 15:59:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:50.824 15:59:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:50.824 15:59:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:50.824 15:59:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:50.824 15:59:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:50.824 15:59:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:50.824 15:59:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:50.824 15:59:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:50.824 15:59:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:50.824 15:59:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:51.084 btrfs-progs v6.6.2 00:07:51.084 See https://btrfs.readthedocs.io for more information. 00:07:51.084 00:07:51.084 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:51.084 NOTE: several default settings have changed in version 5.15, please make sure 00:07:51.084 this does not affect your deployments: 00:07:51.084 - DUP for metadata (-m dup) 00:07:51.084 - enabled no-holes (-O no-holes) 00:07:51.084 - enabled free-space-tree (-R free-space-tree) 00:07:51.084 00:07:51.084 Label: (null) 00:07:51.084 UUID: d9919f74-6375-4329-bcad-109abcfc136c 00:07:51.084 Node size: 16384 00:07:51.084 Sector size: 4096 00:07:51.084 Filesystem size: 510.00MiB 00:07:51.084 Block group profiles: 00:07:51.084 Data: single 8.00MiB 00:07:51.084 Metadata: DUP 32.00MiB 00:07:51.084 System: DUP 8.00MiB 00:07:51.084 SSD detected: yes 00:07:51.084 Zoned device: no 00:07:51.084 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:51.084 Runtime features: free-space-tree 00:07:51.084 Checksum: crc32c 00:07:51.084 Number of devices: 1 00:07:51.084 Devices: 00:07:51.084 ID SIZE PATH 00:07:51.084 1 510.00MiB /dev/nvme0n1p1 00:07:51.084 00:07:51.084 15:59:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:51.084 15:59:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:51.653 15:59:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:51.653 15:59:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:51.653 15:59:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:51.653 15:59:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:51.653 15:59:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:51.653 15:59:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:51.653 15:59:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2104501 00:07:51.653 15:59:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:51.653 15:59:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:51.912 15:59:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:51.912 15:59:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:51.912 00:07:51.912 real 0m0.980s 00:07:51.912 user 0m0.032s 00:07:51.912 sys 0m0.130s 00:07:51.912 15:59:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.912 15:59:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:51.912 ************************************ 00:07:51.912 END TEST filesystem_in_capsule_btrfs 00:07:51.912 ************************************ 00:07:51.912 15:59:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:07:51.912 15:59:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:51.912 15:59:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:51.912 15:59:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.912 15:59:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:51.912 ************************************ 00:07:51.912 START TEST filesystem_in_capsule_xfs 00:07:51.912 ************************************ 00:07:51.912 15:59:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:51.912 15:59:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:51.912 15:59:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:51.912 15:59:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:51.912 15:59:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:51.912 15:59:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:51.912 15:59:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:51.912 15:59:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:07:51.912 15:59:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:51.912 15:59:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:51.912 15:59:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:51.912 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:51.912 = sectsz=512 attr=2, projid32bit=1 00:07:51.912 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:51.912 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:51.912 data = bsize=4096 blocks=130560, imaxpct=25 00:07:51.912 = sunit=0 swidth=0 blks 00:07:51.912 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:51.912 log =internal log bsize=4096 blocks=16384, version=2 00:07:51.912 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:51.912 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:52.850 Discarding blocks...Done. 
00:07:52.850 15:59:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:52.850 15:59:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:54.760 15:59:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:54.760 15:59:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:54.760 15:59:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:54.760 15:59:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:54.760 15:59:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:54.760 15:59:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:54.760 15:59:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2104501 00:07:54.760 15:59:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:54.760 15:59:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:54.760 15:59:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:54.760 15:59:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:54.760 00:07:54.760 real 0m2.954s 00:07:54.760 user 0m0.033s 00:07:54.760 sys 0m0.069s 00:07:54.760 15:59:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:54.760 15:59:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:54.760 ************************************ 00:07:54.760 END TEST filesystem_in_capsule_xfs 00:07:54.760 ************************************ 00:07:54.760 15:59:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:54.760 15:59:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:55.020 15:59:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:55.020 15:59:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:55.020 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:55.020 15:59:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:55.020 15:59:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:55.020 15:59:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:55.020 15:59:30 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:55.020 15:59:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:55.020 15:59:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:55.020 15:59:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:55.020 15:59:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:55.020 15:59:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.020 15:59:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:55.020 15:59:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.020 15:59:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:55.020 15:59:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2104501 00:07:55.020 15:59:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 2104501 ']' 00:07:55.021 15:59:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 2104501 00:07:55.021 15:59:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:55.021 15:59:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:55.021 15:59:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2104501 00:07:55.282 15:59:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:55.282 15:59:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:55.282 15:59:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2104501' 00:07:55.282 killing process with pid 2104501 00:07:55.282 15:59:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 2104501 00:07:55.282 15:59:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 2104501 00:07:55.542 15:59:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:55.542 00:07:55.542 real 0m12.372s 00:07:55.542 user 0m48.698s 00:07:55.542 sys 0m1.231s 00:07:55.542 15:59:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:55.542 15:59:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:55.542 ************************************ 00:07:55.542 END TEST nvmf_filesystem_in_capsule 00:07:55.542 ************************************ 00:07:55.542 15:59:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:55.542 15:59:31 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:55.542 15:59:31 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:07:55.542 15:59:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:55.542 15:59:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:55.542 15:59:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:55.542 15:59:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:55.542 15:59:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:55.542 rmmod nvme_tcp 00:07:55.542 rmmod nvme_fabrics 00:07:55.542 rmmod nvme_keyring 00:07:55.542 15:59:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:55.542 15:59:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:55.542 15:59:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:55.542 15:59:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:55.542 15:59:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:55.542 15:59:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:55.542 15:59:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:55.542 15:59:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:55.542 15:59:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:55.542 15:59:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.542 15:59:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:55.542 15:59:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.087 15:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:58.087 00:07:58.087 real 0m34.986s 00:07:58.087 user 1m41.797s 00:07:58.087 sys 0m7.832s 00:07:58.087 15:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:58.087 15:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:58.087 ************************************ 00:07:58.087 END TEST nvmf_filesystem 00:07:58.087 ************************************ 00:07:58.087 15:59:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:58.087 15:59:33 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:58.087 15:59:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:58.087 15:59:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.087 15:59:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:58.087 ************************************ 00:07:58.087 START TEST nvmf_target_discovery 00:07:58.087 ************************************ 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:58.087 * Looking for test storage... 
00:07:58.087 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:58.087 15:59:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:04.707 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:04.707 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:04.707 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:04.707 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:04.707 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:04.707 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:04.707 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:04.707 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:04.707 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:04.707 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:04.707 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:04.707 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:04.707 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:04.708 15:59:40 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:04.708 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:04.708 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:04.708 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:04.708 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:04.708 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:04.708 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.467 ms 00:08:04.708 00:08:04.708 --- 10.0.0.2 ping statistics --- 00:08:04.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.708 rtt min/avg/max/mdev = 0.467/0.467/0.467/0.000 ms 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:04.708 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:04.708 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.340 ms 00:08:04.708 00:08:04.708 --- 10.0.0.1 ping statistics --- 00:08:04.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.708 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2111362 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2111362 00:08:04.708 15:59:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:04.709 15:59:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 2111362 ']' 00:08:04.709 15:59:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.709 15:59:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:04.709 15:59:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:04.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.709 15:59:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:04.709 15:59:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:04.970 [2024-07-15 15:59:40.557625] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:08:04.970 [2024-07-15 15:59:40.557675] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:04.970 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.970 [2024-07-15 15:59:40.623290] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:04.970 [2024-07-15 15:59:40.688550] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:04.970 [2024-07-15 15:59:40.688589] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:04.970 [2024-07-15 15:59:40.688596] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:04.970 [2024-07-15 15:59:40.688603] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:04.970 [2024-07-15 15:59:40.688609] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:04.970 [2024-07-15 15:59:40.688754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.970 [2024-07-15 15:59:40.688867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:04.970 [2024-07-15 15:59:40.689023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.970 [2024-07-15 15:59:40.689024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:05.542 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:05.542 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:08:05.542 15:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:05.542 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:05.542 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:05.542 15:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:05.542 15:59:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:05.542 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.542 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:05.542 [2024-07-15 15:59:41.370737] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:05.542 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.542 15:59:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
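Before the null-bdev/subsystem loop the trace is entering here, the discovery test started the target application inside the server-side network namespace and created the TCP transport over its RPC socket. A sketch of those two steps, assuming SPDK's scripts/rpc.py is called directly instead of the test's rpc_cmd wrapper (binary path, namespace name, socket path and transport options are copied from the trace):

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # crude stand-in for the waitforlisten helper: wait for the RPC socket to exist
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.5; done
  # create the TCP transport; the option string is taken verbatim from the trace
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192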
00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:05.803 Null1 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:05.803 [2024-07-15 15:59:41.431064] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:05.803 Null2 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:05.803 15:59:41 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:05.803 Null3 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:05.803 Null4 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.803 15:59:41 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:05.803 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.804 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:05.804 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.804 15:59:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:05.804 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.804 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:05.804 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.804 15:59:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:08:06.064 00:08:06.064 Discovery Log Number of Records 6, Generation counter 6 00:08:06.064 =====Discovery Log Entry 0====== 00:08:06.064 trtype: tcp 00:08:06.064 adrfam: ipv4 00:08:06.064 subtype: current discovery subsystem 00:08:06.064 treq: not required 00:08:06.064 portid: 0 00:08:06.064 trsvcid: 4420 00:08:06.064 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:06.064 traddr: 10.0.0.2 00:08:06.064 eflags: explicit discovery connections, duplicate discovery information 00:08:06.064 sectype: none 00:08:06.065 =====Discovery Log Entry 1====== 00:08:06.065 trtype: tcp 00:08:06.065 adrfam: ipv4 00:08:06.065 subtype: nvme subsystem 00:08:06.065 treq: not required 00:08:06.065 portid: 0 00:08:06.065 trsvcid: 4420 00:08:06.065 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:06.065 traddr: 10.0.0.2 00:08:06.065 eflags: none 00:08:06.065 sectype: none 00:08:06.065 =====Discovery Log Entry 2====== 00:08:06.065 trtype: tcp 00:08:06.065 adrfam: ipv4 00:08:06.065 subtype: nvme subsystem 00:08:06.065 treq: not required 00:08:06.065 portid: 0 00:08:06.065 trsvcid: 4420 00:08:06.065 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:06.065 traddr: 10.0.0.2 00:08:06.065 eflags: none 00:08:06.065 sectype: none 00:08:06.065 =====Discovery Log Entry 3====== 00:08:06.065 trtype: tcp 00:08:06.065 adrfam: ipv4 00:08:06.065 subtype: nvme subsystem 00:08:06.065 treq: not required 00:08:06.065 portid: 0 00:08:06.065 trsvcid: 4420 00:08:06.065 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:06.065 traddr: 10.0.0.2 00:08:06.065 eflags: none 00:08:06.065 sectype: none 00:08:06.065 =====Discovery Log Entry 4====== 00:08:06.065 trtype: tcp 00:08:06.065 adrfam: ipv4 00:08:06.065 subtype: nvme subsystem 00:08:06.065 treq: not required 
00:08:06.065 portid: 0 00:08:06.065 trsvcid: 4420 00:08:06.065 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:06.065 traddr: 10.0.0.2 00:08:06.065 eflags: none 00:08:06.065 sectype: none 00:08:06.065 =====Discovery Log Entry 5====== 00:08:06.065 trtype: tcp 00:08:06.065 adrfam: ipv4 00:08:06.065 subtype: discovery subsystem referral 00:08:06.065 treq: not required 00:08:06.065 portid: 0 00:08:06.065 trsvcid: 4430 00:08:06.065 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:06.065 traddr: 10.0.0.2 00:08:06.065 eflags: none 00:08:06.065 sectype: none 00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:06.065 Perform nvmf subsystem discovery via RPC 00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:06.065 [ 00:08:06.065 { 00:08:06.065 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:06.065 "subtype": "Discovery", 00:08:06.065 "listen_addresses": [ 00:08:06.065 { 00:08:06.065 "trtype": "TCP", 00:08:06.065 "adrfam": "IPv4", 00:08:06.065 "traddr": "10.0.0.2", 00:08:06.065 "trsvcid": "4420" 00:08:06.065 } 00:08:06.065 ], 00:08:06.065 "allow_any_host": true, 00:08:06.065 "hosts": [] 00:08:06.065 }, 00:08:06.065 { 00:08:06.065 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:06.065 "subtype": "NVMe", 00:08:06.065 "listen_addresses": [ 00:08:06.065 { 00:08:06.065 "trtype": "TCP", 00:08:06.065 "adrfam": "IPv4", 00:08:06.065 "traddr": "10.0.0.2", 00:08:06.065 "trsvcid": "4420" 00:08:06.065 } 00:08:06.065 ], 00:08:06.065 "allow_any_host": true, 00:08:06.065 "hosts": [], 00:08:06.065 "serial_number": "SPDK00000000000001", 00:08:06.065 "model_number": "SPDK bdev Controller", 00:08:06.065 "max_namespaces": 32, 00:08:06.065 "min_cntlid": 1, 00:08:06.065 "max_cntlid": 65519, 00:08:06.065 "namespaces": [ 00:08:06.065 { 00:08:06.065 "nsid": 1, 00:08:06.065 "bdev_name": "Null1", 00:08:06.065 "name": "Null1", 00:08:06.065 "nguid": "CB620A6D8DF24DEFA74C889215521870", 00:08:06.065 "uuid": "cb620a6d-8df2-4def-a74c-889215521870" 00:08:06.065 } 00:08:06.065 ] 00:08:06.065 }, 00:08:06.065 { 00:08:06.065 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:06.065 "subtype": "NVMe", 00:08:06.065 "listen_addresses": [ 00:08:06.065 { 00:08:06.065 "trtype": "TCP", 00:08:06.065 "adrfam": "IPv4", 00:08:06.065 "traddr": "10.0.0.2", 00:08:06.065 "trsvcid": "4420" 00:08:06.065 } 00:08:06.065 ], 00:08:06.065 "allow_any_host": true, 00:08:06.065 "hosts": [], 00:08:06.065 "serial_number": "SPDK00000000000002", 00:08:06.065 "model_number": "SPDK bdev Controller", 00:08:06.065 "max_namespaces": 32, 00:08:06.065 "min_cntlid": 1, 00:08:06.065 "max_cntlid": 65519, 00:08:06.065 "namespaces": [ 00:08:06.065 { 00:08:06.065 "nsid": 1, 00:08:06.065 "bdev_name": "Null2", 00:08:06.065 "name": "Null2", 00:08:06.065 "nguid": "1F6B4D4C791D42E7925E933B13BA03F7", 00:08:06.065 "uuid": "1f6b4d4c-791d-42e7-925e-933b13ba03f7" 00:08:06.065 } 00:08:06.065 ] 00:08:06.065 }, 00:08:06.065 { 00:08:06.065 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:06.065 "subtype": "NVMe", 00:08:06.065 "listen_addresses": [ 00:08:06.065 { 00:08:06.065 "trtype": "TCP", 00:08:06.065 "adrfam": "IPv4", 00:08:06.065 "traddr": "10.0.0.2", 00:08:06.065 "trsvcid": "4420" 00:08:06.065 } 00:08:06.065 ], 00:08:06.065 "allow_any_host": true, 
00:08:06.065 "hosts": [], 00:08:06.065 "serial_number": "SPDK00000000000003", 00:08:06.065 "model_number": "SPDK bdev Controller", 00:08:06.065 "max_namespaces": 32, 00:08:06.065 "min_cntlid": 1, 00:08:06.065 "max_cntlid": 65519, 00:08:06.065 "namespaces": [ 00:08:06.065 { 00:08:06.065 "nsid": 1, 00:08:06.065 "bdev_name": "Null3", 00:08:06.065 "name": "Null3", 00:08:06.065 "nguid": "BFC60B648E69469C82F8956FE64B1968", 00:08:06.065 "uuid": "bfc60b64-8e69-469c-82f8-956fe64b1968" 00:08:06.065 } 00:08:06.065 ] 00:08:06.065 }, 00:08:06.065 { 00:08:06.065 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:06.065 "subtype": "NVMe", 00:08:06.065 "listen_addresses": [ 00:08:06.065 { 00:08:06.065 "trtype": "TCP", 00:08:06.065 "adrfam": "IPv4", 00:08:06.065 "traddr": "10.0.0.2", 00:08:06.065 "trsvcid": "4420" 00:08:06.065 } 00:08:06.065 ], 00:08:06.065 "allow_any_host": true, 00:08:06.065 "hosts": [], 00:08:06.065 "serial_number": "SPDK00000000000004", 00:08:06.065 "model_number": "SPDK bdev Controller", 00:08:06.065 "max_namespaces": 32, 00:08:06.065 "min_cntlid": 1, 00:08:06.065 "max_cntlid": 65519, 00:08:06.065 "namespaces": [ 00:08:06.065 { 00:08:06.065 "nsid": 1, 00:08:06.065 "bdev_name": "Null4", 00:08:06.065 "name": "Null4", 00:08:06.065 "nguid": "057A9560A5994AE887EB5035CD1B16D4", 00:08:06.065 "uuid": "057a9560-a599-4ae8-87eb-5035cd1b16d4" 00:08:06.065 } 00:08:06.065 ] 00:08:06.065 } 00:08:06.065 ] 00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:06.065 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.326 15:59:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:06.326 15:59:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:06.326 15:59:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:06.327 15:59:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:06.327 15:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:06.327 15:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:06.327 15:59:41 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:06.327 15:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:06.327 15:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:06.327 15:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:06.327 rmmod nvme_tcp 00:08:06.327 rmmod nvme_fabrics 00:08:06.327 rmmod nvme_keyring 00:08:06.327 15:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:06.327 15:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:06.327 15:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:06.327 15:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2111362 ']' 00:08:06.327 15:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2111362 00:08:06.327 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 2111362 ']' 00:08:06.327 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 2111362 00:08:06.327 15:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:08:06.327 15:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:06.327 15:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2111362 00:08:06.327 15:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:06.327 15:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:06.327 15:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2111362' 00:08:06.327 killing process with pid 2111362 00:08:06.327 15:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 2111362 00:08:06.327 15:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 2111362 00:08:06.588 15:59:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:06.588 15:59:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:06.588 15:59:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:06.588 15:59:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:06.588 15:59:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:06.588 15:59:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.588 15:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:06.588 15:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.509 15:59:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:08.509 00:08:08.509 real 0m10.879s 00:08:08.509 user 0m8.152s 00:08:08.509 sys 0m5.514s 00:08:08.509 15:59:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:08.509 15:59:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.509 ************************************ 00:08:08.509 END TEST nvmf_target_discovery 00:08:08.509 ************************************ 00:08:08.509 15:59:44 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:08:08.509 15:59:44 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:08.509 15:59:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:08.509 15:59:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.509 15:59:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:08.509 ************************************ 00:08:08.509 START TEST nvmf_referrals 00:08:08.509 ************************************ 00:08:08.509 15:59:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:08.770 * Looking for test storage... 00:08:08.770 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
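The referral addresses defined here (127.0.0.2 through 127.0.0.4, port 4430) drive the round-trip exercised in the rest of this test: each address is registered as a discovery referral, read back both over RPC and over the wire, and then removed again. A rough sketch of that round-trip for a single referral, again assuming the stock scripts/rpc.py client rather than the rpc_cmd wrapper (the nvme flags and jq filters are copied from the trace that follows):

  # target side: TCP transport plus a discovery listener on port 8009
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  # register a referral; -n optionally pins it to a specific subsystem NQN instead of the discovery NQN
  ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
  ./scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'
  # initiator side: the referral shows up as an extra discovery-log record
  nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
  # removing the referral empties the list again
  ./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430

The log below performs this for all three referral IPs, then repeats the add/remove cycle with -n discovery and -n nqn.2016-06.io.spdk:cnode1 to check how the subtype is reported for each case.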
00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:08.770 15:59:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:15.387 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:15.387 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:15.387 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:15.387 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:15.387 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:15.387 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:15.387 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:15.387 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:15.387 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:15.387 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:15.387 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:15.387 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:15.387 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:15.387 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:15.387 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:15.387 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:15.387 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:15.387 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:15.387 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:15.387 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:15.387 15:59:51 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:15.387 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:15.387 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:15.387 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:15.387 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:15.387 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:15.387 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:15.387 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:15.387 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:15.387 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:15.387 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:15.387 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:15.387 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:15.387 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:15.387 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:15.387 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:15.387 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:15.387 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.387 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.387 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:15.387 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:15.387 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:15.387 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:15.387 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:15.387 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:15.388 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.388 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.388 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:15.388 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:15.388 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:15.388 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:15.388 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:15.388 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.388 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:15.388 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.388 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:15.388 15:59:51 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:15.388 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.388 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:15.388 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:15.388 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.388 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:15.388 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.388 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:15.388 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.388 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:15.649 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:15.649 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.649 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:15.649 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:15.649 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.649 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:15.649 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:15.649 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:15.649 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:15.649 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:15.649 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:15.649 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:15.649 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:15.649 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:15.649 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:15.649 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:15.649 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:15.649 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:15.649 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:15.649 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:15.649 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:15.649 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:15.649 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:15.649 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:15.649 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:15.649 15:59:51 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:15.649 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:15.649 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:15.910 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:15.910 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:15.910 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:15.910 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.717 ms 00:08:15.910 00:08:15.910 --- 10.0.0.2 ping statistics --- 00:08:15.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.910 rtt min/avg/max/mdev = 0.717/0.717/0.717/0.000 ms 00:08:15.910 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:15.910 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:15.910 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.363 ms 00:08:15.910 00:08:15.910 --- 10.0.0.1 ping statistics --- 00:08:15.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.910 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:08:15.910 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:15.910 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:15.910 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:15.910 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:15.910 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:15.910 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:15.910 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:15.910 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:15.910 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:15.910 15:59:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:15.910 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:15.910 15:59:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:15.910 15:59:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:15.910 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2115865 00:08:15.911 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2115865 00:08:15.911 15:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:15.911 15:59:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 2115865 ']' 00:08:15.911 15:59:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.911 15:59:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:15.911 15:59:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:15.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.911 15:59:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:15.911 15:59:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:15.911 [2024-07-15 15:59:51.628718] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:08:15.911 [2024-07-15 15:59:51.628784] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.911 EAL: No free 2048 kB hugepages reported on node 1 00:08:15.911 [2024-07-15 15:59:51.703049] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:16.172 [2024-07-15 15:59:51.778898] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:16.172 [2024-07-15 15:59:51.778939] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:16.172 [2024-07-15 15:59:51.778947] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:16.172 [2024-07-15 15:59:51.778953] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:16.172 [2024-07-15 15:59:51.778959] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:16.172 [2024-07-15 15:59:51.779109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.172 [2024-07-15 15:59:51.779257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:16.172 [2024-07-15 15:59:51.779498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:16.172 [2024-07-15 15:59:51.779499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.744 15:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:16.744 15:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:08:16.744 15:59:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:16.744 15:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:16.744 15:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:16.744 15:59:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:16.744 15:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:16.744 15:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.744 15:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:16.744 [2024-07-15 15:59:52.465757] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:16.744 15:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.744 15:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:16.744 15:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.744 15:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:16.744 [2024-07-15 15:59:52.481951] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:08:16.744 15:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.744 15:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:16.744 15:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.744 15:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:16.744 15:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.744 15:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:16.744 15:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.744 15:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:16.744 15:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.744 15:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:16.744 15:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.744 15:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:16.744 15:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.744 15:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:16.744 15:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:16.744 15:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.744 15:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:16.744 15:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.744 15:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:16.744 15:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:16.744 15:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:16.744 15:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:16.744 15:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:16.744 15:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.744 15:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:16.744 15:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:17.006 15:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.006 15:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:17.006 15:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:17.006 15:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:17.006 15:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:17.006 15:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:17.006 15:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:17.006 15:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:17.006 15:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:17.006 15:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:17.006 15:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:17.006 15:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:17.006 15:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.006 15:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:17.267 15:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.267 15:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:17.267 15:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.267 15:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:17.267 15:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.267 15:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:17.267 15:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.267 15:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:17.267 15:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.267 15:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:17.267 15:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:17.267 15:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.267 15:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:17.267 15:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.267 15:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:17.267 15:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:17.267 15:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:17.267 15:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:17.267 15:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:17.267 15:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:17.267 15:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:17.267 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:17.267 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:17.267 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:08:17.267 15:59:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.267 15:59:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:17.267 15:59:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.267 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:17.267 15:59:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.267 15:59:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:17.267 15:59:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.267 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:17.267 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:17.267 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:17.267 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:17.267 15:59:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.267 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:17.267 15:59:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:17.267 15:59:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.527 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:17.527 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:17.527 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:17.527 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:17.527 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:17.527 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:17.527 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:17.527 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:17.527 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:17.527 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:17.527 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:17.528 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:17.528 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:17.528 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:17.528 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:17.528 15:59:53 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:17.528 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:17.528 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:17.528 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:17.528 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:17.528 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:17.788 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:17.788 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:17.788 15:59:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.788 15:59:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:17.788 15:59:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.788 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:17.788 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:17.788 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:17.788 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:17.788 15:59:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.788 15:59:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:17.788 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:17.788 15:59:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.788 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:17.788 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:17.788 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:17.788 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:17.788 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:17.788 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:17.788 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:17.788 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:18.049 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:18.049 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:18.049 15:59:53 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:18.049 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:18.049 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:18.049 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:18.049 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:18.049 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:18.049 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:18.049 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:18.049 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:18.049 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:18.049 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:18.310 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:18.310 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:18.310 15:59:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.310 15:59:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.310 15:59:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.310 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:18.310 15:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:18.310 15:59:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.310 15:59:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.310 15:59:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.310 15:59:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:18.310 15:59:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:18.310 15:59:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:18.310 15:59:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:18.310 15:59:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:18.310 15:59:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:18.310 15:59:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:18.571 
15:59:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:18.571 15:59:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:18.571 15:59:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:18.571 15:59:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:18.571 15:59:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:18.571 15:59:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:18.571 15:59:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:18.571 15:59:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:18.571 15:59:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:18.571 15:59:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:18.571 rmmod nvme_tcp 00:08:18.571 rmmod nvme_fabrics 00:08:18.571 rmmod nvme_keyring 00:08:18.571 15:59:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:18.571 15:59:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:18.571 15:59:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:18.571 15:59:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2115865 ']' 00:08:18.571 15:59:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2115865 00:08:18.571 15:59:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 2115865 ']' 00:08:18.571 15:59:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 2115865 00:08:18.571 15:59:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:08:18.571 15:59:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:18.571 15:59:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2115865 00:08:18.571 15:59:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:18.571 15:59:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:18.571 15:59:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2115865' 00:08:18.571 killing process with pid 2115865 00:08:18.571 15:59:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 2115865 00:08:18.571 15:59:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 2115865 00:08:18.571 15:59:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:18.571 15:59:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:18.571 15:59:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:18.571 15:59:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:18.571 15:59:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:18.571 15:59:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.571 15:59:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:18.571 15:59:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.120 15:59:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:21.120 00:08:21.120 real 0m12.141s 00:08:21.120 user 0m13.422s 00:08:21.120 sys 0m5.945s 00:08:21.120 15:59:56 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:21.120 15:59:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:21.120 ************************************ 00:08:21.120 END TEST nvmf_referrals 00:08:21.120 ************************************ 00:08:21.120 15:59:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:21.120 15:59:56 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:21.120 15:59:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:21.120 15:59:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:21.120 15:59:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:21.120 ************************************ 00:08:21.120 START TEST nvmf_connect_disconnect 00:08:21.120 ************************************ 00:08:21.120 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:21.120 * Looking for test storage... 00:08:21.120 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:21.120 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:21.120 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:21.120 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:21.120 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:21.120 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:21.120 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:21.120 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:21.120 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:21.120 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:21.120 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:21.120 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:21.120 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:21.120 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:21.120 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:21.120 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:21.120 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:21.120 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:21.120 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:21.120 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:21.120 15:59:56 
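Note: each stage is dispatched through run_test with the script path and --transport shown above. Environment permitting (SPDK built, E810 ports wired back-to-back, root), the same stage can in principle be re-run on its own:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sudo NET_TYPE=phy ./test/nvmf/target/connect_disconnect.sh --transport=tcp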
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:21.120 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:21.120 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:21.120 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.120 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.120 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.120 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:21.120 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.120 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:21.120 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:21.120 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:21.120 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:21.120 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:21.120 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:21.120 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:21.120 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:21.120 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:21.120 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:21.120 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:21.121 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:21.121 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:21.121 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:21.121 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:21.121 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:21.121 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:21.121 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.121 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:21.121 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.121 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:21.121 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:21.121 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:21.121 15:59:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:27.734 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:27.734 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:27.734 16:00:03 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:27.734 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.735 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:27.735 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:27.735 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.735 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:27.735 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.735 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:27.735 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:27.735 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:27.735 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:27.735 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.735 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:27.735 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:27.735 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.735 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:27.735 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:27.735 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:27.735 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:27.735 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:27.735 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:27.735 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:27.735 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:27.735 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:27.735 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:27.735 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:27.735 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- 
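Note: gather_supported_nvmf_pci_devs simply pairs each whitelisted PCI function with the netdev sysfs exposes under it, as the Found lines above show. The same lookup by hand, for the two E810 ports in this run:

  # 0x159b is one of the Intel E810 device IDs the harness accepts.
  for pci in 0000:4b:00.0 0000:4b:00.1; do
    echo "$pci id=$(cat /sys/bus/pci/devices/$pci/device)" \
         "netdev=$(ls /sys/bus/pci/devices/$pci/net/)"
  done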
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:27.735 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:27.735 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:27.735 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:27.735 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:27.735 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:27.735 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:27.735 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:27.735 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:27.735 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:27.735 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:27.996 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:27.996 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:27.996 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:27.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:27.996 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.514 ms 00:08:27.996 00:08:27.996 --- 10.0.0.2 ping statistics --- 00:08:27.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.996 rtt min/avg/max/mdev = 0.514/0.514/0.514/0.000 ms 00:08:27.996 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:27.996 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:27.996 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.402 ms 00:08:27.996 00:08:27.996 --- 10.0.0.1 ping statistics --- 00:08:27.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.996 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:08:27.996 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:27.996 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:27.996 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:27.996 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:27.996 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:27.996 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:27.996 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:27.996 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:27.996 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:27.996 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:27.996 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:27.996 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:27.996 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:27.996 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2120791 00:08:27.996 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2120791 00:08:27.996 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:27.996 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 2120791 ']' 00:08:27.996 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.996 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:27.996 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.996 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:27.996 16:00:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:27.996 [2024-07-15 16:00:03.796870] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
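Note: the namespace plumbing and ping checks above give the target port its own netns so initiator and target can share one host across the physical link. Condensed, with interface names and addresses as used in this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator-side port
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target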
00:08:27.996 [2024-07-15 16:00:03.796923] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.996 EAL: No free 2048 kB hugepages reported on node 1 00:08:28.257 [2024-07-15 16:00:03.863238] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:28.257 [2024-07-15 16:00:03.931369] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:28.257 [2024-07-15 16:00:03.931407] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:28.257 [2024-07-15 16:00:03.931415] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:28.257 [2024-07-15 16:00:03.931422] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:28.257 [2024-07-15 16:00:03.931427] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:28.257 [2024-07-15 16:00:03.931488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.257 [2024-07-15 16:00:03.931604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:28.257 [2024-07-15 16:00:03.931761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.257 [2024-07-15 16:00:03.931762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:28.833 16:00:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:28.833 16:00:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:08:28.833 16:00:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:28.833 16:00:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:28.833 16:00:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:28.833 16:00:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:28.833 16:00:04 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:28.833 16:00:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.833 16:00:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:28.833 [2024-07-15 16:00:04.618765] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:28.833 16:00:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.833 16:00:04 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:28.833 16:00:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.833 16:00:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:28.833 16:00:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.833 16:00:04 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:28.833 16:00:04 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:28.833 16:00:04 
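Note: nvmfappstart launches nvmf_tgt inside the test namespace and waits for its RPC socket before the script issues any rpc_cmd. Stripped of the harness wrappers, and with the readiness poll being one possible way to wait (not necessarily what waitforlisten does internally):

  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5        # poll until the app answers on its default RPC socket
  done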
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.833 16:00:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:28.833 16:00:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.833 16:00:04 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:28.833 16:00:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.833 16:00:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:28.833 16:00:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.833 16:00:04 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:28.833 16:00:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.833 16:00:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:29.094 [2024-07-15 16:00:04.678102] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:29.094 16:00:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.094 16:00:04 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:29.094 16:00:04 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:29.094 16:00:04 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:33.296 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:36.617 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:39.915 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:44.117 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.413 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.413 16:00:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:47.413 16:00:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:47.413 16:00:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:47.413 16:00:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:08:47.413 16:00:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:47.413 16:00:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:08:47.413 16:00:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:47.413 16:00:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:47.413 rmmod nvme_tcp 00:08:47.413 rmmod nvme_fabrics 00:08:47.413 rmmod nvme_keyring 00:08:47.413 16:00:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:47.413 16:00:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:08:47.413 16:00:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:08:47.413 16:00:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2120791 ']' 00:08:47.413 16:00:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2120791 00:08:47.413 16:00:23 nvmf_tcp.nvmf_connect_disconnect -- 
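Note: behind the rpc_cmd calls above, this stage provisions a single malloc-backed subsystem and then loops nvme connect/disconnect against it five times. A sketch of the equivalent RPC and nvme-cli sequence, with the NQN, serial and hostnqn taken from this run:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  scripts/rpc.py bdev_malloc_create 64 512             # 64 MiB bdev, 512 B blocks -> Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  for i in $(seq 5); do
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  done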
common/autotest_common.sh@948 -- # '[' -z 2120791 ']' 00:08:47.413 16:00:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 2120791 00:08:47.413 16:00:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:08:47.413 16:00:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:47.413 16:00:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2120791 00:08:47.413 16:00:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:47.413 16:00:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:47.413 16:00:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2120791' 00:08:47.413 killing process with pid 2120791 00:08:47.413 16:00:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 2120791 00:08:47.413 16:00:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 2120791 00:08:47.673 16:00:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:47.673 16:00:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:47.673 16:00:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:47.674 16:00:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:47.674 16:00:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:47.674 16:00:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.674 16:00:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:47.674 16:00:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.587 16:00:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:49.587 00:08:49.587 real 0m28.802s 00:08:49.587 user 1m19.080s 00:08:49.587 sys 0m6.468s 00:08:49.587 16:00:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:49.587 16:00:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:49.587 ************************************ 00:08:49.587 END TEST nvmf_connect_disconnect 00:08:49.587 ************************************ 00:08:49.587 16:00:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:49.587 16:00:25 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:49.587 16:00:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:49.587 16:00:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:49.587 16:00:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:49.848 ************************************ 00:08:49.848 START TEST nvmf_multitarget 00:08:49.848 ************************************ 00:08:49.848 16:00:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:49.848 * Looking for test storage... 
00:08:49.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:49.848 16:00:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:49.848 16:00:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:08:49.848 16:00:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:49.848 16:00:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:49.848 16:00:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:49.848 16:00:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:49.848 16:00:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:49.848 16:00:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:49.848 16:00:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:49.848 16:00:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:49.848 16:00:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:49.848 16:00:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:49.848 16:00:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:49.848 16:00:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:49.848 16:00:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:49.848 16:00:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:49.848 16:00:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:49.848 16:00:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:49.848 16:00:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:49.848 16:00:25 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:49.848 16:00:25 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:49.848 16:00:25 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:49.849 16:00:25 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.849 16:00:25 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.849 16:00:25 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.849 16:00:25 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:08:49.849 16:00:25 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.849 16:00:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:08:49.849 16:00:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:49.849 16:00:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:49.849 16:00:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:49.849 16:00:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:49.849 16:00:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:49.849 16:00:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:49.849 16:00:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:49.849 16:00:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:49.849 16:00:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:49.849 16:00:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:08:49.849 16:00:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:49.849 16:00:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:49.849 16:00:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:49.849 16:00:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:49.849 16:00:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:49.849 16:00:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:08:49.849 16:00:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:49.849 16:00:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.849 16:00:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:49.849 16:00:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:49.849 16:00:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:08:49.849 16:00:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:58.017 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:58.017 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:58.017 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:58.017 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:58.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:58.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.512 ms 00:08:58.017 00:08:58.017 --- 10.0.0.2 ping statistics --- 00:08:58.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.017 rtt min/avg/max/mdev = 0.512/0.512/0.512/0.000 ms 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:58.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:58.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.333 ms 00:08:58.017 00:08:58.017 --- 10.0.0.1 ping statistics --- 00:08:58.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.017 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2129207 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2129207 00:08:58.017 16:00:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:58.018 16:00:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 2129207 ']' 00:08:58.018 16:00:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.018 16:00:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:58.018 16:00:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.018 16:00:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:58.018 16:00:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:58.018 [2024-07-15 16:00:32.811978] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
00:08:58.018 [2024-07-15 16:00:32.812041] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:58.018 EAL: No free 2048 kB hugepages reported on node 1 00:08:58.018 [2024-07-15 16:00:32.883315] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:58.018 [2024-07-15 16:00:32.959391] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:58.018 [2024-07-15 16:00:32.959429] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:58.018 [2024-07-15 16:00:32.959437] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:58.018 [2024-07-15 16:00:32.959443] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:58.018 [2024-07-15 16:00:32.959449] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:58.018 [2024-07-15 16:00:32.959586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:58.018 [2024-07-15 16:00:32.959709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:58.018 [2024-07-15 16:00:32.959866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.018 [2024-07-15 16:00:32.959867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:58.018 16:00:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:58.018 16:00:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:08:58.018 16:00:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:58.018 16:00:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:58.018 16:00:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:58.018 16:00:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:58.018 16:00:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:58.018 16:00:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:58.018 16:00:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:08:58.018 16:00:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:58.018 16:00:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:58.018 "nvmf_tgt_1" 00:08:58.018 16:00:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:58.279 "nvmf_tgt_2" 00:08:58.279 16:00:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:58.279 16:00:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:08:58.279 16:00:34 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:08:58.279 16:00:34 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:58.541 true 00:08:58.541 16:00:34 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:58.541 true 00:08:58.541 16:00:34 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:58.541 16:00:34 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:08:58.541 16:00:34 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:58.541 16:00:34 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:58.541 16:00:34 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:08:58.541 16:00:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:58.541 16:00:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:08:58.541 16:00:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:58.541 16:00:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:08:58.541 16:00:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:58.541 16:00:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:58.541 rmmod nvme_tcp 00:08:58.541 rmmod nvme_fabrics 00:08:58.802 rmmod nvme_keyring 00:08:58.802 16:00:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:58.802 16:00:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:08:58.802 16:00:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:08:58.802 16:00:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2129207 ']' 00:08:58.802 16:00:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2129207 00:08:58.803 16:00:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 2129207 ']' 00:08:58.803 16:00:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 2129207 00:08:58.803 16:00:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:08:58.803 16:00:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:58.803 16:00:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2129207 00:08:58.803 16:00:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:58.803 16:00:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:58.803 16:00:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2129207' 00:08:58.803 killing process with pid 2129207 00:08:58.803 16:00:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 2129207 00:08:58.803 16:00:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 2129207 00:08:58.803 16:00:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:58.803 16:00:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:58.803 16:00:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:58.803 16:00:34 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:58.803 16:00:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:58.803 16:00:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.803 16:00:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:58.803 16:00:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.347 16:00:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:01.347 00:09:01.347 real 0m11.246s 00:09:01.347 user 0m9.283s 00:09:01.347 sys 0m5.826s 00:09:01.347 16:00:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:01.347 16:00:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:01.347 ************************************ 00:09:01.347 END TEST nvmf_multitarget 00:09:01.347 ************************************ 00:09:01.347 16:00:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:01.347 16:00:36 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:01.347 16:00:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:01.347 16:00:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:01.347 16:00:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:01.347 ************************************ 00:09:01.347 START TEST nvmf_rpc 00:09:01.347 ************************************ 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:01.347 * Looking for test storage... 
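Note: the nvmf_multitarget run that just finished boils down to the RPC sequence below. The script path, RPC names and arguments are taken verbatim from the trace; the bracket checks are a paraphrase of its jq-length comparisons:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

# Only the default target exists after startup.
[ "$("$RPC" nvmf_get_targets | jq length)" -eq 1 ]

# Create two extra targets with -s 32, as in the trace, then verify the count.
"$RPC" nvmf_create_target -n nvmf_tgt_1 -s 32
"$RPC" nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$("$RPC" nvmf_get_targets | jq length)" -eq 3 ]

# Delete them again and confirm only the default target remains.
"$RPC" nvmf_delete_target -n nvmf_tgt_1
"$RPC" nvmf_delete_target -n nvmf_tgt_2
[ "$("$RPC" nvmf_get_targets | jq length)" -eq 1 ]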
00:09:01.347 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:09:01.347 16:00:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
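Note: sourcing nvmf/common.sh above also fixes the host identity that every later nvme connect in this log reuses. A minimal sketch, assuming the UUID is simply stripped out of the generated NQN (the actual derivation in nvmf/common.sh may differ):

NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-...
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # bare UUID portion; extraction shown here is an assumption
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

# Later connect calls in this test expand that array, e.g.:
# nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420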
00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:07.971 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:07.971 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:07.971 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:07.971 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:07.971 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:08.232 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:08.233 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:08.233 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:08.233 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:08.233 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.550 ms 00:09:08.233 00:09:08.233 --- 10.0.0.2 ping statistics --- 00:09:08.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.233 rtt min/avg/max/mdev = 0.550/0.550/0.550/0.000 ms 00:09:08.233 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:08.233 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
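Note: the nvmf_tcp_init trace above pushes one e810 port (cvl_0_0) into a private namespace as the target side, leaves the other (cvl_0_1) in the root namespace as the initiator, opens port 4420, and checks connectivity both ways. Condensed sketch using the commands and names that appear verbatim in this log:

NVMF_TARGET_INTERFACE=cvl_0_0         # moved into the namespace, carries the target IP
NVMF_INITIATOR_INTERFACE=cvl_0_1      # stays in the root namespace, carries the initiator IP
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

ip netns add "$NVMF_TARGET_NAMESPACE"
ip link set "$NVMF_TARGET_INTERFACE" netns "$NVMF_TARGET_NAMESPACE"
ip addr add 10.0.0.1/24 dev "$NVMF_INITIATOR_INTERFACE"
ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev "$NVMF_TARGET_INTERFACE"
ip link set "$NVMF_INITIATOR_INTERFACE" up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set "$NVMF_TARGET_INTERFACE" up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
iptables -I INPUT 1 -i "$NVMF_INITIATOR_INTERFACE" -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                                         # initiator -> target
ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1  # target -> initiator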
00:09:08.233 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.437 ms 00:09:08.233 00:09:08.233 --- 10.0.0.1 ping statistics --- 00:09:08.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.233 rtt min/avg/max/mdev = 0.437/0.437/0.437/0.000 ms 00:09:08.233 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:08.233 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:09:08.233 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:08.233 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:08.233 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:08.233 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:08.233 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:08.233 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:08.233 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:08.233 16:00:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:09:08.233 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:08.233 16:00:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:08.233 16:00:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.233 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2133877 00:09:08.233 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2133877 00:09:08.233 16:00:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:08.233 16:00:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 2133877 ']' 00:09:08.233 16:00:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.233 16:00:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:08.233 16:00:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.233 16:00:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:08.233 16:00:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.233 [2024-07-15 16:00:44.034572] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:09:08.233 [2024-07-15 16:00:44.034626] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:08.233 EAL: No free 2048 kB hugepages reported on node 1 00:09:08.501 [2024-07-15 16:00:44.102350] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:08.501 [2024-07-15 16:00:44.169847] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:08.501 [2024-07-15 16:00:44.169884] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:08.501 [2024-07-15 16:00:44.169891] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:08.501 [2024-07-15 16:00:44.169898] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:08.501 [2024-07-15 16:00:44.169904] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:08.501 [2024-07-15 16:00:44.170045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:08.501 [2024-07-15 16:00:44.170159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:08.501 [2024-07-15 16:00:44.170258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.501 [2024-07-15 16:00:44.170259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:09.075 16:00:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:09.075 16:00:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:09.075 16:00:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:09.075 16:00:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:09.076 16:00:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.076 16:00:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:09.076 16:00:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:09:09.076 16:00:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.076 16:00:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.076 16:00:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.076 16:00:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:09:09.076 "tick_rate": 2400000000, 00:09:09.076 "poll_groups": [ 00:09:09.076 { 00:09:09.076 "name": "nvmf_tgt_poll_group_000", 00:09:09.076 "admin_qpairs": 0, 00:09:09.076 "io_qpairs": 0, 00:09:09.076 "current_admin_qpairs": 0, 00:09:09.076 "current_io_qpairs": 0, 00:09:09.076 "pending_bdev_io": 0, 00:09:09.076 "completed_nvme_io": 0, 00:09:09.076 "transports": [] 00:09:09.076 }, 00:09:09.076 { 00:09:09.076 "name": "nvmf_tgt_poll_group_001", 00:09:09.076 "admin_qpairs": 0, 00:09:09.076 "io_qpairs": 0, 00:09:09.076 "current_admin_qpairs": 0, 00:09:09.076 "current_io_qpairs": 0, 00:09:09.076 "pending_bdev_io": 0, 00:09:09.076 "completed_nvme_io": 0, 00:09:09.076 "transports": [] 00:09:09.076 }, 00:09:09.076 { 00:09:09.076 "name": "nvmf_tgt_poll_group_002", 00:09:09.076 "admin_qpairs": 0, 00:09:09.076 "io_qpairs": 0, 00:09:09.076 "current_admin_qpairs": 0, 00:09:09.076 "current_io_qpairs": 0, 00:09:09.076 "pending_bdev_io": 0, 00:09:09.076 "completed_nvme_io": 0, 00:09:09.076 "transports": [] 00:09:09.076 }, 00:09:09.076 { 00:09:09.076 "name": "nvmf_tgt_poll_group_003", 00:09:09.076 "admin_qpairs": 0, 00:09:09.076 "io_qpairs": 0, 00:09:09.076 "current_admin_qpairs": 0, 00:09:09.076 "current_io_qpairs": 0, 00:09:09.076 "pending_bdev_io": 0, 00:09:09.076 "completed_nvme_io": 0, 00:09:09.076 "transports": [] 00:09:09.076 } 00:09:09.076 ] 00:09:09.076 }' 00:09:09.076 16:00:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:09:09.076 16:00:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:09:09.076 16:00:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:09:09.076 16:00:44 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:09:09.076 16:00:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:09:09.076 16:00:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:09:09.336 16:00:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:09:09.336 16:00:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:09.336 16:00:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.336 16:00:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.336 [2024-07-15 16:00:44.964156] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:09.336 16:00:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.336 16:00:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:09:09.336 16:00:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.336 16:00:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.336 16:00:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.336 16:00:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:09:09.336 "tick_rate": 2400000000, 00:09:09.336 "poll_groups": [ 00:09:09.336 { 00:09:09.336 "name": "nvmf_tgt_poll_group_000", 00:09:09.336 "admin_qpairs": 0, 00:09:09.336 "io_qpairs": 0, 00:09:09.336 "current_admin_qpairs": 0, 00:09:09.336 "current_io_qpairs": 0, 00:09:09.336 "pending_bdev_io": 0, 00:09:09.336 "completed_nvme_io": 0, 00:09:09.336 "transports": [ 00:09:09.336 { 00:09:09.336 "trtype": "TCP" 00:09:09.336 } 00:09:09.336 ] 00:09:09.336 }, 00:09:09.336 { 00:09:09.336 "name": "nvmf_tgt_poll_group_001", 00:09:09.336 "admin_qpairs": 0, 00:09:09.336 "io_qpairs": 0, 00:09:09.336 "current_admin_qpairs": 0, 00:09:09.336 "current_io_qpairs": 0, 00:09:09.336 "pending_bdev_io": 0, 00:09:09.336 "completed_nvme_io": 0, 00:09:09.336 "transports": [ 00:09:09.336 { 00:09:09.336 "trtype": "TCP" 00:09:09.336 } 00:09:09.336 ] 00:09:09.336 }, 00:09:09.336 { 00:09:09.336 "name": "nvmf_tgt_poll_group_002", 00:09:09.336 "admin_qpairs": 0, 00:09:09.336 "io_qpairs": 0, 00:09:09.336 "current_admin_qpairs": 0, 00:09:09.336 "current_io_qpairs": 0, 00:09:09.336 "pending_bdev_io": 0, 00:09:09.336 "completed_nvme_io": 0, 00:09:09.336 "transports": [ 00:09:09.336 { 00:09:09.336 "trtype": "TCP" 00:09:09.336 } 00:09:09.336 ] 00:09:09.336 }, 00:09:09.336 { 00:09:09.336 "name": "nvmf_tgt_poll_group_003", 00:09:09.336 "admin_qpairs": 0, 00:09:09.336 "io_qpairs": 0, 00:09:09.336 "current_admin_qpairs": 0, 00:09:09.336 "current_io_qpairs": 0, 00:09:09.336 "pending_bdev_io": 0, 00:09:09.336 "completed_nvme_io": 0, 00:09:09.336 "transports": [ 00:09:09.336 { 00:09:09.336 "trtype": "TCP" 00:09:09.336 } 00:09:09.336 ] 00:09:09.336 } 00:09:09.336 ] 00:09:09.336 }' 00:09:09.336 16:00:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:09:09.336 16:00:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:09.336 16:00:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:09.336 16:00:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:09.336 16:00:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:09:09.336 16:00:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:09:09.336 16:00:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
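Note: the jcount/jsum checks being traced here count and sum fields of the nvmf_get_stats JSON with jq and awk. A condensed sketch; how the captured stats JSON is fed to jq inside the real target/rpc.sh helpers is an assumption:

jcount() { jq "$1" | wc -l; }                        # count matching JSON fields
jsum()   { jq "$1" | awk '{s+=$1} END {print s}'; }  # sum matching numeric fields

stats=$(rpc_cmd nvmf_get_stats)                      # rpc_cmd is the autotest RPC wrapper seen above

[ "$(echo "$stats" | jcount '.poll_groups[].name')" -eq 4 ]        # one poll group per core (-m 0xF)
[ "$(echo "$stats" | jsum '.poll_groups[].admin_qpairs')" -eq 0 ]  # no admin connections yet
[ "$(echo "$stats" | jsum '.poll_groups[].io_qpairs')" -eq 0 ]     # no I/O connections yet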
00:09:09.336 16:00:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:09.336 16:00:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:09.336 16:00:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:09:09.336 16:00:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:09:09.336 16:00:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:09:09.336 16:00:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:09:09.336 16:00:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:09.336 16:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.336 16:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.336 Malloc1 00:09:09.336 16:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.336 16:00:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:09.336 16:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.336 16:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.336 16:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.336 16:00:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:09.336 16:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.336 16:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.336 16:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.336 16:00:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:09:09.336 16:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.336 16:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.336 16:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.336 16:00:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:09.336 16:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.336 16:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.336 [2024-07-15 16:00:45.151973] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:09.336 16:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.336 16:00:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:09:09.336 16:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:09.336 16:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:09:09.336 16:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:09:09.336 16:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:09.336 16:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:09.336 16:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:09.336 16:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:09.336 16:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:09.336 16:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:09.336 16:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:09.336 16:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:09:09.604 [2024-07-15 16:00:45.178945] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:09:09.604 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:09.604 could not add new controller: failed to write to nvme-fabrics device 00:09:09.604 16:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:09:09.604 16:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:09.604 16:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:09.604 16:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:09.604 16:00:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:09.604 16:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.604 16:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.604 16:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.604 16:00:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:10.991 16:00:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:09:10.991 16:00:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:10.991 16:00:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:10.991 16:00:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:10.991 16:00:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:13.535 16:00:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:13.535 16:00:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:13.535 16:00:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:13.535 16:00:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:13.535 16:00:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:13.535 16:00:48 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:13.535 16:00:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:13.535 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.535 16:00:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:13.535 16:00:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:13.535 16:00:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:13.535 16:00:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:13.535 16:00:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:13.535 16:00:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:13.535 16:00:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:13.535 16:00:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:13.535 16:00:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.535 16:00:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.535 16:00:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.535 16:00:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:13.535 16:00:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:13.535 16:00:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:13.535 16:00:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:09:13.535 16:00:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:13.535 16:00:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:13.535 16:00:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:13.535 16:00:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:13.535 16:00:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:13.535 16:00:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:13.535 16:00:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:13.535 16:00:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:13.535 [2024-07-15 16:00:48.937473] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:09:13.535 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:13.535 could not add new controller: failed to write to nvme-fabrics device 00:09:13.535 16:00:48 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:09:13.535 16:00:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:13.535 16:00:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:13.535 16:00:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:13.535 16:00:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:09:13.535 16:00:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.535 16:00:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.535 16:00:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.535 16:00:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:14.918 16:00:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:09:14.918 16:00:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:14.918 16:00:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:14.918 16:00:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:14.918 16:00:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:16.838 16:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:16.839 16:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:16.839 16:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:16.839 16:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:16.839 16:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:16.839 16:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:16.839 16:00:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:16.839 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.839 16:00:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:16.839 16:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:16.839 16:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:16.839 16:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:16.839 16:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:16.839 16:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:16.839 16:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:16.839 16:00:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:16.839 16:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.839 16:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.839 16:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.839 16:00:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:09:16.839 16:00:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:16.839 16:00:52 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:16.839 16:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.839 16:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.839 16:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.839 16:00:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:16.839 16:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.839 16:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.839 [2024-07-15 16:00:52.667726] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:16.839 16:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.839 16:00:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:16.839 16:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.839 16:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.100 16:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.100 16:00:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:17.100 16:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.100 16:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.100 16:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.100 16:00:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:18.482 16:00:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:18.482 16:00:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:18.482 16:00:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:18.482 16:00:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:18.482 16:00:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:20.392 16:00:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:20.393 16:00:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:20.393 16:00:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:20.393 16:00:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:20.393 16:00:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:20.393 16:00:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:20.393 16:00:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:20.653 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.653 16:00:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:20.653 16:00:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:20.653 16:00:56 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:20.653 16:00:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:20.653 16:00:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:20.653 16:00:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:20.653 16:00:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:20.653 16:00:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:20.653 16:00:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.653 16:00:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.653 16:00:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.653 16:00:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:20.653 16:00:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.653 16:00:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.653 16:00:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.653 16:00:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:20.653 16:00:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:20.653 16:00:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.653 16:00:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.653 16:00:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.653 16:00:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:20.653 16:00:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.653 16:00:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.653 [2024-07-15 16:00:56.395145] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:20.653 16:00:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.653 16:00:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:20.653 16:00:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.653 16:00:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.653 16:00:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.653 16:00:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:20.653 16:00:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.653 16:00:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.653 16:00:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.653 16:00:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:22.563 16:00:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:22.563 16:00:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:09:22.563 16:00:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:22.563 16:00:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:22.563 16:00:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:24.536 16:00:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:24.536 16:00:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:24.536 16:00:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:24.536 16:01:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:24.536 16:01:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:24.537 16:01:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:24.537 16:01:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:24.537 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.537 16:01:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:24.537 16:01:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:24.537 16:01:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:24.537 16:01:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:24.537 16:01:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:24.537 16:01:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:24.537 16:01:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:24.537 16:01:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:24.537 16:01:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.537 16:01:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.537 16:01:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.537 16:01:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:24.537 16:01:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.537 16:01:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.537 16:01:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.537 16:01:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:24.537 16:01:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:24.537 16:01:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.537 16:01:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.537 16:01:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.537 16:01:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:24.537 16:01:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.537 16:01:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.537 [2024-07-15 16:01:00.152581] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:24.537 16:01:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.537 16:01:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:24.537 16:01:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.537 16:01:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.537 16:01:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.537 16:01:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:24.537 16:01:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.537 16:01:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.537 16:01:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.537 16:01:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:25.921 16:01:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:25.921 16:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:25.921 16:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:25.921 16:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:25.921 16:01:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:27.889 16:01:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:27.889 16:01:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:27.889 16:01:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:27.889 16:01:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:27.889 16:01:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:27.889 16:01:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:27.889 16:01:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:28.158 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.158 16:01:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:28.158 16:01:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:28.158 16:01:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:28.158 16:01:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:28.158 16:01:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:28.158 16:01:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:28.158 16:01:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:28.158 16:01:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:28.158 16:01:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.158 16:01:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.158 16:01:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:09:28.158 16:01:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:28.158 16:01:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.158 16:01:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.158 16:01:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.158 16:01:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:28.158 16:01:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:28.158 16:01:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.158 16:01:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.158 16:01:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.158 16:01:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:28.158 16:01:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.158 16:01:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.158 [2024-07-15 16:01:03.864424] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:28.158 16:01:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.158 16:01:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:28.158 16:01:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.158 16:01:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.158 16:01:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.159 16:01:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:28.159 16:01:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.159 16:01:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.159 16:01:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.159 16:01:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:30.072 16:01:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:30.072 16:01:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:30.072 16:01:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:30.072 16:01:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:30.072 16:01:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:31.987 16:01:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:31.987 16:01:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:31.987 16:01:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:31.987 16:01:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:31.987 16:01:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:31.987 
16:01:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:31.987 16:01:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:31.987 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.987 16:01:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:31.987 16:01:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:31.987 16:01:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:31.987 16:01:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:31.987 16:01:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:31.987 16:01:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:31.987 16:01:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:31.987 16:01:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:31.987 16:01:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.987 16:01:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:31.987 16:01:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.987 16:01:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:31.987 16:01:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.987 16:01:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:31.987 16:01:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.987 16:01:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:31.987 16:01:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:31.987 16:01:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.987 16:01:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:31.987 16:01:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.987 16:01:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:31.987 16:01:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.987 16:01:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:31.987 [2024-07-15 16:01:07.620346] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:31.987 16:01:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.987 16:01:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:31.987 16:01:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.987 16:01:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:31.987 16:01:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.987 16:01:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:31.987 16:01:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.987 16:01:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:31.987 16:01:07 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.987 16:01:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:33.371 16:01:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:33.371 16:01:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:33.371 16:01:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:33.371 16:01:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:33.371 16:01:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:35.918 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:35.919 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.919 [2024-07-15 16:01:11.330302] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.919 [2024-07-15 16:01:11.394479] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.919 [2024-07-15 16:01:11.458666] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
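For reference, the host-side connect / wait-for-serial / disconnect cycle that repeats throughout the trace above condenses to roughly the following sketch (NQN, serial, address and the retry limit of 15 are taken from the trace; error handling and the waitforserial_disconnect check are omitted):

    # host side of one loop iteration: connect, wait for the namespace to appear, disconnect
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be
    i=0
    while (( i++ <= 15 )); do
        sleep 2
        # the namespace is visible once lsblk reports a block device with the subsystem serial
        (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) >= 1 )) && break
    done
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
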
00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.919 [2024-07-15 16:01:11.518857] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:35.919 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.920 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
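The target-side half of each iteration is just a short sequence of RPCs; rpc_cmd in the trace is effectively a wrapper around scripts/rpc.py, so a minimal standalone sketch (assuming the TCP transport already exists and the Malloc1 bdev was created earlier in the test) would be:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for i in $(seq 1 5); do
        $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
        $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done
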
00:09:35.920 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.920 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:35.920 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.920 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.920 [2024-07-15 16:01:11.579049] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:35.920 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.920 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:35.920 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.920 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.920 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.920 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:35.920 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.920 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.920 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.920 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:35.920 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.920 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.920 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.920 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:35.920 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.920 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.920 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.920 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:09:35.920 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.920 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.920 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.920 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:09:35.920 "tick_rate": 2400000000, 00:09:35.920 "poll_groups": [ 00:09:35.920 { 00:09:35.920 "name": "nvmf_tgt_poll_group_000", 00:09:35.920 "admin_qpairs": 0, 00:09:35.920 "io_qpairs": 224, 00:09:35.920 "current_admin_qpairs": 0, 00:09:35.920 "current_io_qpairs": 0, 00:09:35.920 "pending_bdev_io": 0, 00:09:35.920 "completed_nvme_io": 520, 00:09:35.920 "transports": [ 00:09:35.920 { 00:09:35.920 "trtype": "TCP" 00:09:35.920 } 00:09:35.920 ] 00:09:35.920 }, 00:09:35.920 { 00:09:35.920 "name": "nvmf_tgt_poll_group_001", 00:09:35.920 "admin_qpairs": 1, 00:09:35.920 "io_qpairs": 223, 00:09:35.920 "current_admin_qpairs": 0, 00:09:35.920 "current_io_qpairs": 0, 00:09:35.920 "pending_bdev_io": 0, 00:09:35.920 "completed_nvme_io": 226, 00:09:35.920 "transports": [ 00:09:35.920 { 00:09:35.920 "trtype": "TCP" 00:09:35.920 } 00:09:35.920 ] 00:09:35.920 }, 00:09:35.920 { 
00:09:35.920 "name": "nvmf_tgt_poll_group_002", 00:09:35.920 "admin_qpairs": 6, 00:09:35.920 "io_qpairs": 218, 00:09:35.920 "current_admin_qpairs": 0, 00:09:35.920 "current_io_qpairs": 0, 00:09:35.920 "pending_bdev_io": 0, 00:09:35.920 "completed_nvme_io": 219, 00:09:35.920 "transports": [ 00:09:35.920 { 00:09:35.920 "trtype": "TCP" 00:09:35.920 } 00:09:35.920 ] 00:09:35.920 }, 00:09:35.920 { 00:09:35.920 "name": "nvmf_tgt_poll_group_003", 00:09:35.920 "admin_qpairs": 0, 00:09:35.920 "io_qpairs": 224, 00:09:35.920 "current_admin_qpairs": 0, 00:09:35.920 "current_io_qpairs": 0, 00:09:35.920 "pending_bdev_io": 0, 00:09:35.920 "completed_nvme_io": 274, 00:09:35.920 "transports": [ 00:09:35.920 { 00:09:35.920 "trtype": "TCP" 00:09:35.920 } 00:09:35.920 ] 00:09:35.920 } 00:09:35.920 ] 00:09:35.920 }' 00:09:35.920 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:09:35.920 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:35.920 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:35.920 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:35.920 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:09:35.920 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:09:35.920 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:35.920 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:35.920 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:35.920 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:09:35.920 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:09:35.920 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:09:35.920 16:01:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:09:35.920 16:01:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:35.920 16:01:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:09:35.920 16:01:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:35.920 16:01:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:09:35.920 16:01:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:35.920 16:01:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:35.920 rmmod nvme_tcp 00:09:36.181 rmmod nvme_fabrics 00:09:36.181 rmmod nvme_keyring 00:09:36.181 16:01:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:36.181 16:01:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:09:36.181 16:01:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:09:36.181 16:01:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2133877 ']' 00:09:36.181 16:01:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2133877 00:09:36.181 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 2133877 ']' 00:09:36.181 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 2133877 00:09:36.181 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:09:36.181 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:36.181 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2133877 00:09:36.181 16:01:11 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:36.181 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:36.181 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2133877' 00:09:36.181 killing process with pid 2133877 00:09:36.181 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 2133877 00:09:36.181 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 2133877 00:09:36.181 16:01:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:36.181 16:01:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:36.181 16:01:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:36.181 16:01:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:36.181 16:01:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:36.181 16:01:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.181 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:36.181 16:01:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.727 16:01:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:38.727 00:09:38.727 real 0m37.309s 00:09:38.727 user 1m53.238s 00:09:38.727 sys 0m7.060s 00:09:38.727 16:01:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:38.727 16:01:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:38.727 ************************************ 00:09:38.727 END TEST nvmf_rpc 00:09:38.727 ************************************ 00:09:38.728 16:01:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:38.728 16:01:14 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:38.728 16:01:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:38.728 16:01:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:38.728 16:01:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:38.728 ************************************ 00:09:38.728 START TEST nvmf_invalid 00:09:38.728 ************************************ 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:38.728 * Looking for test storage... 
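The jsum helper exercised just above to validate the nvmf_get_stats output is only a jq projection piped through awk; in isolation it is roughly:

    # sum one numeric field across all poll groups in the captured nvmf_get_stats JSON
    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s += $1} END {print s}'
    }
    stats=$(scripts/rpc.py nvmf_get_stats)
    jsum '.poll_groups[].admin_qpairs'    # 0 + 1 + 6 + 0 = 7 for the stats shown above
    jsum '.poll_groups[].io_qpairs'       # 224 + 223 + 218 + 224 = 889
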
00:09:38.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:09:38.728 16:01:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:45.318 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:45.318 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:45.318 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:45.318 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:45.318 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:45.319 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:45.319 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:45.319 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:45.319 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:45.319 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:45.319 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:45.319 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:45.319 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:45.580 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:45.580 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:45.580 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:45.580 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:45.580 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:45.580 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:45.580 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:45.580 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:45.580 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.499 ms 00:09:45.580 00:09:45.580 --- 10.0.0.2 ping statistics --- 00:09:45.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.580 rtt min/avg/max/mdev = 0.499/0.499/0.499/0.000 ms 00:09:45.580 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:45.580 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:45.580 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.396 ms 00:09:45.580 00:09:45.580 --- 10.0.0.1 ping statistics --- 00:09:45.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.580 rtt min/avg/max/mdev = 0.396/0.396/0.396/0.000 ms 00:09:45.580 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:45.580 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:09:45.580 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:45.580 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:45.580 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:45.580 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:45.580 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:45.580 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:45.580 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:45.580 16:01:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:45.580 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:45.580 16:01:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:45.580 16:01:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:45.580 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2143722 00:09:45.580 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2143722 00:09:45.580 16:01:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:45.580 16:01:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 2143722 ']' 00:09:45.580 16:01:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.580 16:01:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:45.580 16:01:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.580 16:01:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:45.580 16:01:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:45.840 [2024-07-15 16:01:21.447072] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
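Stripped of the xtrace noise, the nvmf_tcp_init sequence traced above wires the two e810 ports into a target namespace roughly as follows (interface names as discovered in this run; the target port cvl_0_0 moves into its own namespace while the initiator port cvl_0_1 stays in the root namespace):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # sanity check before nvmfappstart launches nvmf_tgt inside the namespace
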
00:09:45.840 [2024-07-15 16:01:21.447143] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:45.840 EAL: No free 2048 kB hugepages reported on node 1 00:09:45.840 [2024-07-15 16:01:21.515095] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:45.840 [2024-07-15 16:01:21.579990] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:45.840 [2024-07-15 16:01:21.580029] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:45.840 [2024-07-15 16:01:21.580037] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:45.840 [2024-07-15 16:01:21.580043] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:45.840 [2024-07-15 16:01:21.580049] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:45.840 [2024-07-15 16:01:21.580196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:45.840 [2024-07-15 16:01:21.580384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.840 [2024-07-15 16:01:21.580385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:45.840 [2024-07-15 16:01:21.580234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:46.410 16:01:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:46.410 16:01:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:09:46.410 16:01:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:46.410 16:01:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:46.410 16:01:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:46.670 16:01:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:46.670 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:46.670 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode31476 00:09:46.670 [2024-07-15 16:01:22.410164] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:46.670 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:09:46.670 { 00:09:46.670 "nqn": "nqn.2016-06.io.spdk:cnode31476", 00:09:46.670 "tgt_name": "foobar", 00:09:46.670 "method": "nvmf_create_subsystem", 00:09:46.670 "req_id": 1 00:09:46.670 } 00:09:46.670 Got JSON-RPC error response 00:09:46.670 response: 00:09:46.670 { 00:09:46.670 "code": -32603, 00:09:46.670 "message": "Unable to find target foobar" 00:09:46.670 }' 00:09:46.670 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:09:46.670 { 00:09:46.670 "nqn": "nqn.2016-06.io.spdk:cnode31476", 00:09:46.670 "tgt_name": "foobar", 00:09:46.670 "method": "nvmf_create_subsystem", 00:09:46.670 "req_id": 1 00:09:46.670 } 00:09:46.670 Got JSON-RPC error response 00:09:46.670 response: 00:09:46.670 { 00:09:46.670 "code": -32603, 00:09:46.670 "message": "Unable to find target foobar" 
00:09:46.670 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:46.670 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:46.670 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode27030 00:09:46.930 [2024-07-15 16:01:22.586778] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27030: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:46.930 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:09:46.930 { 00:09:46.930 "nqn": "nqn.2016-06.io.spdk:cnode27030", 00:09:46.930 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:46.930 "method": "nvmf_create_subsystem", 00:09:46.930 "req_id": 1 00:09:46.930 } 00:09:46.930 Got JSON-RPC error response 00:09:46.930 response: 00:09:46.930 { 00:09:46.930 "code": -32602, 00:09:46.930 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:46.930 }' 00:09:46.930 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:09:46.930 { 00:09:46.930 "nqn": "nqn.2016-06.io.spdk:cnode27030", 00:09:46.930 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:46.930 "method": "nvmf_create_subsystem", 00:09:46.930 "req_id": 1 00:09:46.930 } 00:09:46.930 Got JSON-RPC error response 00:09:46.930 response: 00:09:46.930 { 00:09:46.930 "code": -32602, 00:09:46.930 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:46.930 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:46.930 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:46.930 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode5609 00:09:46.930 [2024-07-15 16:01:22.763298] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5609: invalid model number 'SPDK_Controller' 00:09:47.190 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:09:47.190 { 00:09:47.190 "nqn": "nqn.2016-06.io.spdk:cnode5609", 00:09:47.190 "model_number": "SPDK_Controller\u001f", 00:09:47.190 "method": "nvmf_create_subsystem", 00:09:47.190 "req_id": 1 00:09:47.190 } 00:09:47.190 Got JSON-RPC error response 00:09:47.190 response: 00:09:47.190 { 00:09:47.190 "code": -32602, 00:09:47.190 "message": "Invalid MN SPDK_Controller\u001f" 00:09:47.190 }' 00:09:47.190 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:09:47.190 { 00:09:47.190 "nqn": "nqn.2016-06.io.spdk:cnode5609", 00:09:47.190 "model_number": "SPDK_Controller\u001f", 00:09:47.190 "method": "nvmf_create_subsystem", 00:09:47.190 "req_id": 1 00:09:47.190 } 00:09:47.190 Got JSON-RPC error response 00:09:47.190 response: 00:09:47.190 { 00:09:47.190 "code": -32602, 00:09:47.190 "message": "Invalid MN SPDK_Controller\u001f" 00:09:47.190 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:47.190 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:09:47.190 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:09:47.190 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' 
'84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:47.190 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:47.190 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:47.190 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:47.190 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:47.190 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:09:47.190 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:09:47.190 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:09:47.190 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:47.190 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:47.190 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:09:47.190 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:09:47.190 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:09:47.190 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:47.190 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:47.190 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:09:47.190 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:09:47.190 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:09:47.190 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:47.190 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:47.190 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:09:47.190 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:09:47.190 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:09:47.190 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:47.190 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:47.190 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:09:47.190 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:09:47.190 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:09:47.190 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:47.190 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:47.190 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:09:47.190 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:09:47.190 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:09:47.190 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:47.190 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:47.190 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:09:47.190 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:09:47.190 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:09:47.190 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:47.190 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:47.190 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:09:47.190 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:09:47.190 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:09:47.190 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:47.190 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ c == \- ]] 00:09:47.191 16:01:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'cXMf(i?5;^o0 /dev/null' 00:09:49.535 16:01:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.079 16:01:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:52.079 00:09:52.079 real 0m13.294s 00:09:52.079 user 0m19.310s 00:09:52.079 sys 0m6.218s 00:09:52.079 16:01:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:52.079 16:01:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:52.079 ************************************ 00:09:52.079 END TEST nvmf_invalid 00:09:52.079 ************************************ 00:09:52.079 16:01:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # 
return 0 00:09:52.079 16:01:27 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:52.079 16:01:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:52.079 16:01:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:52.079 16:01:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:52.079 ************************************ 00:09:52.079 START TEST nvmf_abort 00:09:52.079 ************************************ 00:09:52.079 16:01:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:52.079 * Looking for test storage... 00:09:52.079 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:52.079 16:01:27 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:52.079 16:01:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:52.079 16:01:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:52.079 16:01:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:52.079 16:01:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:52.079 16:01:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:52.079 16:01:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:52.079 16:01:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:52.079 16:01:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:52.079 16:01:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:52.079 16:01:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:52.079 16:01:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:52.080 16:01:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:52.080 16:01:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:52.080 16:01:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:52.080 16:01:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:52.080 16:01:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:52.080 16:01:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:52.080 16:01:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:52.080 16:01:27 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:52.080 16:01:27 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:52.080 16:01:27 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:52.080 16:01:27 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.080 16:01:27 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.080 16:01:27 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.080 16:01:27 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:52.080 16:01:27 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.080 16:01:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:09:52.080 16:01:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:52.080 16:01:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:52.080 16:01:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:52.080 16:01:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:52.080 16:01:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:52.080 16:01:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:52.080 16:01:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:52.080 16:01:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:52.080 16:01:27 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:52.080 16:01:27 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:52.080 16:01:27 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:09:52.080 16:01:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:52.080 16:01:27 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:52.080 16:01:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:52.080 16:01:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:52.080 16:01:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:52.080 16:01:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.080 16:01:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:52.080 16:01:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.080 16:01:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:52.080 16:01:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:52.080 16:01:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:09:52.080 16:01:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:58.703 
16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:58.703 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:58.703 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:58.703 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:58.703 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:58.703 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:58.704 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:58.704 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:58.704 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:58.704 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:58.704 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:58.704 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:58.704 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:58.704 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:58.704 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:58.704 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:58.704 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:58.965 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:58.965 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:58.965 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:58.965 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:58.965 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:58.965 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:58.965 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:58.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:58.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.526 ms 00:09:58.965 00:09:58.965 --- 10.0.0.2 ping statistics --- 00:09:58.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.965 rtt min/avg/max/mdev = 0.526/0.526/0.526/0.000 ms 00:09:58.965 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:58.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:58.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.365 ms 00:09:58.965 00:09:58.965 --- 10.0.0.1 ping statistics --- 00:09:58.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.965 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:09:58.965 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:58.965 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:09:58.965 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:58.965 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:58.965 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:58.965 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:58.965 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:58.965 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:58.965 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:58.965 16:01:34 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:58.965 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:58.965 16:01:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:58.965 16:01:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:58.965 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2148710 00:09:58.965 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2148710 00:09:58.965 16:01:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:58.965 16:01:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 2148710 ']' 00:09:58.965 16:01:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.965 16:01:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:58.965 16:01:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:58.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.965 16:01:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:58.965 16:01:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:59.226 [2024-07-15 16:01:34.853646] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
00:09:59.226 [2024-07-15 16:01:34.853712] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.226 EAL: No free 2048 kB hugepages reported on node 1 00:09:59.226 [2024-07-15 16:01:34.942908] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:59.226 [2024-07-15 16:01:35.038032] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:59.226 [2024-07-15 16:01:35.038091] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:59.226 [2024-07-15 16:01:35.038099] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:59.226 [2024-07-15 16:01:35.038106] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:59.226 [2024-07-15 16:01:35.038112] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:59.226 [2024-07-15 16:01:35.038187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:59.226 [2024-07-15 16:01:35.038417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:59.226 [2024-07-15 16:01:35.038418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:59.797 16:01:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:59.797 16:01:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:09:59.797 16:01:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:59.797 16:01:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:59.797 16:01:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:00.058 16:01:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:00.058 16:01:35 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:10:00.058 16:01:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.058 16:01:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:00.058 [2024-07-15 16:01:35.680614] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:00.058 16:01:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.058 16:01:35 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:10:00.058 16:01:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.058 16:01:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:00.058 Malloc0 00:10:00.058 16:01:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.058 16:01:35 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:00.058 16:01:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.058 16:01:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:00.058 Delay0 00:10:00.058 16:01:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.058 16:01:35 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
00:10:00.058 16:01:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.058 16:01:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:00.058 16:01:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.058 16:01:35 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:10:00.058 16:01:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.058 16:01:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:00.058 16:01:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.058 16:01:35 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:00.058 16:01:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.058 16:01:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:00.058 [2024-07-15 16:01:35.755531] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:00.058 16:01:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.058 16:01:35 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:00.058 16:01:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.058 16:01:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:00.058 16:01:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.058 16:01:35 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:10:00.058 EAL: No free 2048 kB hugepages reported on node 1 00:10:00.319 [2024-07-15 16:01:35.917308] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:02.233 Initializing NVMe Controllers 00:10:02.233 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:02.233 controller IO queue size 128 less than required 00:10:02.233 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:10:02.233 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:10:02.233 Initialization complete. Launching workers. 
00:10:02.233 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33863 00:10:02.233 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33924, failed to submit 62 00:10:02.233 success 33867, unsuccess 57, failed 0 00:10:02.233 16:01:37 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:02.233 16:01:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.233 16:01:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:02.233 16:01:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.233 16:01:37 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:10:02.233 16:01:37 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:10:02.233 16:01:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:02.233 16:01:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:10:02.233 16:01:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:02.233 16:01:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:10:02.233 16:01:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:02.233 16:01:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:02.233 rmmod nvme_tcp 00:10:02.233 rmmod nvme_fabrics 00:10:02.233 rmmod nvme_keyring 00:10:02.233 16:01:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:02.233 16:01:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:10:02.233 16:01:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:10:02.233 16:01:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2148710 ']' 00:10:02.233 16:01:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2148710 00:10:02.233 16:01:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 2148710 ']' 00:10:02.233 16:01:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 2148710 00:10:02.233 16:01:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:10:02.233 16:01:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:02.233 16:01:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2148710 00:10:02.492 16:01:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:02.492 16:01:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:02.492 16:01:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2148710' 00:10:02.492 killing process with pid 2148710 00:10:02.492 16:01:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 2148710 00:10:02.492 16:01:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 2148710 00:10:02.492 16:01:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:02.492 16:01:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:02.492 16:01:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:02.492 16:01:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:02.492 16:01:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:02.492 16:01:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:02.492 16:01:38 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:02.492 16:01:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.036 16:01:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:05.036 00:10:05.036 real 0m12.791s 00:10:05.036 user 0m13.335s 00:10:05.036 sys 0m6.199s 00:10:05.036 16:01:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:05.036 16:01:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:05.036 ************************************ 00:10:05.036 END TEST nvmf_abort 00:10:05.036 ************************************ 00:10:05.036 16:01:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:05.036 16:01:40 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:05.036 16:01:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:05.036 16:01:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:05.036 16:01:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:05.036 ************************************ 00:10:05.036 START TEST nvmf_ns_hotplug_stress 00:10:05.036 ************************************ 00:10:05.036 16:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:05.036 * Looking for test storage... 00:10:05.036 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:05.036 16:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:05.036 16:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:10:05.036 16:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:05.036 16:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:05.036 16:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:05.036 16:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:05.036 16:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:05.036 16:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:05.036 16:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:05.036 16:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:05.036 16:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:05.036 16:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:05.036 16:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:05.036 16:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:05.036 16:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:05.036 16:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:05.036 16:01:40 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:05.036 16:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:05.036 16:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:05.036 16:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:05.036 16:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:05.036 16:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:05.036 16:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.036 16:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.036 16:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.036 16:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:10:05.036 16:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.036 16:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:10:05.036 16:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:05.036 16:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:05.036 16:01:40 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:05.036 16:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:05.036 16:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:05.036 16:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:05.036 16:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:05.036 16:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:05.036 16:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:05.036 16:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:10:05.036 16:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:05.036 16:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:05.036 16:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:05.036 16:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:05.036 16:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:05.036 16:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.036 16:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:05.036 16:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.036 16:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:05.036 16:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:05.036 16:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:05.036 16:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:11.677 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:11.677 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:11.677 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:11.677 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:11.677 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:11.677 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:11.677 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:11.677 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:11.677 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:11.677 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:10:11.677 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:11.677 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:10:11.677 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:11.677 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:11.677 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:10:11.677 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:11.677 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:11.677 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:11.677 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:11.677 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:11.677 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:11.677 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:11.677 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:11.677 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:11.677 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:11.677 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:11.677 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:11.677 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:11.677 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:11.677 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:11.677 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:11.677 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:11.677 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:11.677 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:11.677 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:11.677 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:11.677 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:11.677 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:11.677 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:11.677 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:11.677 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:11.677 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:11.677 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:11.677 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:11.677 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:11.677 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:11.677 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:11.677 16:01:47 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:11.678 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:11.678 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:11.678 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:11.678 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:11.678 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:11.678 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:11.678 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:11.678 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:11.678 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:11.678 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:11.678 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:11.678 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:11.678 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:11.678 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:11.678 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:11.678 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:11.678 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:11.678 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:11.678 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:11.678 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:11.678 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:11.678 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:11.678 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:11.678 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:11.678 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:11.678 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:11.678 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:11.678 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:11.678 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:11.678 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:11.678 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:11.678 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:11.678 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:11.678 16:01:47 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:11.678 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:11.678 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:11.678 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:11.678 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:11.678 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:11.678 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:11.678 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:11.678 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:11.678 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:11.678 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:11.678 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:11.940 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:11.940 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:11.940 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:11.940 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:11.940 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.464 ms 00:10:11.940 00:10:11.940 --- 10.0.0.2 ping statistics --- 00:10:11.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.940 rtt min/avg/max/mdev = 0.464/0.464/0.464/0.000 ms 00:10:11.940 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:11.940 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:11.940 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:10:11.940 00:10:11.940 --- 10.0.0.1 ping statistics --- 00:10:11.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.940 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:10:11.940 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:11.940 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:10:11.940 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:11.940 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:11.940 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:11.940 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:11.940 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:11.940 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:11.940 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:11.940 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:10:11.940 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:11.940 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:11.940 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:11.940 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2153599 00:10:11.940 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2153599 00:10:11.940 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:11.940 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 2153599 ']' 00:10:11.940 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.940 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:11.940 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.940 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:11.940 16:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:11.940 [2024-07-15 16:01:47.703930] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
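For reference, the cabled-loopback topology that nvmftestinit has just verified above (target port cvl_0_0 moved into namespace cvl_0_0_ns_spdk at 10.0.0.2, initiator port cvl_0_1 left on the host at 10.0.0.1, TCP port 4420 opened, pings checked in both directions) can be reproduced by hand with the same commands the trace shows. A minimal bash sketch, assuming two back-to-back E810 ports that enumerate as cvl_0_0 and cvl_0_1 as in this run; it is not the verbatim nvmf/common.sh code:

    # Sketch of the netns-based NVMe/TCP test topology traced above.
    TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"                    # target port lives inside the namespace
    ip addr add 10.0.0.1/24 dev "$INI_IF"                # initiator side stays in the host stack
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # allow the NVMe/TCP port
    ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1     # verify both directions, as in the log

The target application itself then runs inside that namespace (the "ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt" line above), so the listener at 10.0.0.2:4420 is what the host-side initiator reaches.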
00:10:11.940 [2024-07-15 16:01:47.703993] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:11.940 EAL: No free 2048 kB hugepages reported on node 1 00:10:12.201 [2024-07-15 16:01:47.792309] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:12.201 [2024-07-15 16:01:47.885833] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:12.201 [2024-07-15 16:01:47.885891] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:12.201 [2024-07-15 16:01:47.885900] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:12.201 [2024-07-15 16:01:47.885907] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:12.201 [2024-07-15 16:01:47.885913] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:12.201 [2024-07-15 16:01:47.886048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:12.201 [2024-07-15 16:01:47.886219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:12.201 [2024-07-15 16:01:47.886400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.795 16:01:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:12.795 16:01:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:10:12.795 16:01:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:12.795 16:01:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:12.795 16:01:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:12.795 16:01:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:12.795 16:01:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:10:12.795 16:01:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:13.057 [2024-07-15 16:01:48.651679] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:13.057 16:01:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:13.057 16:01:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:13.318 [2024-07-15 16:01:48.993156] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:13.318 16:01:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:13.578 16:01:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:10:13.578 Malloc0 00:10:13.578 16:01:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:13.838 Delay0 00:10:13.839 16:01:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.099 16:01:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:14.099 NULL1 00:10:14.099 16:01:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:14.360 16:01:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:14.360 16:01:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2153977 00:10:14.360 16:01:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:14.360 16:01:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:14.360 EAL: No free 2048 kB hugepages reported on node 1 00:10:14.360 16:01:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.621 16:01:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:10:14.621 16:01:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:14.882 [2024-07-15 16:01:50.485235] bdev.c:5033:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 1 00:10:14.882 true 00:10:14.882 16:01:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:14.882 16:01:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:14.882 16:01:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:15.143 16:01:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:10:15.143 16:01:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:15.143 true 00:10:15.404 16:01:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:15.404 16:01:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.404 16:01:51 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:15.664 16:01:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:10:15.664 16:01:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:15.664 true 00:10:15.664 16:01:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:15.664 16:01:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.924 16:01:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:15.924 16:01:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:10:15.924 16:01:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:16.184 true 00:10:16.184 16:01:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:16.184 16:01:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.445 16:01:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.445 16:01:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:10:16.445 16:01:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:16.706 true 00:10:16.706 16:01:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:16.706 16:01:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.650 Read completed with error (sct=0, sc=11) 00:10:17.650 16:01:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:17.650 16:01:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:10:17.650 16:01:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:17.910 true 00:10:17.910 16:01:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:17.910 16:01:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.170 16:01:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.170 16:01:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:10:18.170 16:01:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:18.430 true 00:10:18.430 16:01:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:18.430 16:01:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.430 16:01:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.690 16:01:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:18.690 16:01:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:18.964 true 00:10:18.964 16:01:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:18.964 16:01:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.964 16:01:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.234 16:01:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:10:19.234 16:01:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:19.234 true 00:10:19.234 16:01:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:19.234 16:01:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.495 16:01:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.756 16:01:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:10:19.756 16:01:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:19.756 true 00:10:19.756 16:01:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:19.756 16:01:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.700 16:01:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:20.700 16:01:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 
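Condensed from the trace between nvmf_create_transport and the spdk_nvme_perf launch above, the provisioning that set this stress loop up amounts to the RPC sequence below. Paths are shortened to $rpc/$perf; this is a sketch of the traced flow, not the verbatim ns_hotplug_stress.sh, and the flag comments are interpretations rather than quotes from the script:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 512 -b Malloc0        # small malloc bdev as the base device
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # slow wrapper over Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # namespace 1: the one the loop keeps removing/re-adding
    $rpc bdev_null_create NULL1 1000 512             # namespace 2's backing bdev, resized on every iteration
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    "$perf" -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &    # 30 s random-read load over both namespaces
    PERF_PID=$!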
00:10:20.700 16:01:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:20.960 true 00:10:20.960 16:01:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:20.960 16:01:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.913 16:01:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.913 16:01:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:10:21.913 16:01:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:22.174 true 00:10:22.174 16:01:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:22.174 16:01:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.435 16:01:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.435 16:01:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:10:22.435 16:01:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:22.696 true 00:10:22.696 16:01:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:22.696 16:01:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.696 16:01:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.956 16:01:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:10:22.956 16:01:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:22.956 true 00:10:23.218 16:01:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:23.218 16:01:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.218 16:01:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.479 16:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:10:23.479 16:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1015 00:10:23.479 true 00:10:23.479 16:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:23.479 16:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.740 16:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.001 16:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:10:24.001 16:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:24.001 true 00:10:24.001 16:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:24.001 16:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.262 16:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.262 16:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:10:24.262 16:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:24.523 true 00:10:24.523 16:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:24.523 16:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.785 16:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.785 16:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:10:24.785 16:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:25.046 true 00:10:25.046 16:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:25.046 16:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.988 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:25.988 16:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.988 16:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:25.988 16:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:26.248 true 00:10:26.248 16:02:01 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:26.249 16:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.249 16:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.510 16:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:10:26.510 16:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:26.772 true 00:10:26.772 16:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:26.772 16:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.772 16:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.032 16:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:10:27.033 16:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:27.293 true 00:10:27.293 16:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:27.293 16:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.293 16:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.554 16:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:10:27.554 16:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:27.554 true 00:10:27.814 16:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:27.814 16:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.814 16:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.074 16:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:28.074 16:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:28.074 true 00:10:28.074 16:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:28.074 16:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.016 16:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.275 16:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:29.275 16:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:29.275 true 00:10:29.275 16:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:29.275 16:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.534 16:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.793 16:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:29.793 16:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:29.793 true 00:10:29.793 16:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:29.793 16:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.053 16:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.314 16:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:30.314 16:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:30.314 true 00:10:30.314 16:02:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:30.314 16:02:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.574 16:02:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.834 16:02:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:10:30.834 16:02:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:30.834 true 00:10:30.834 16:02:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:30.834 16:02:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
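Every iteration in this stretch of the log follows the same five-step pattern. Stripped of the xtrace prefixes, the loop driving it can be read as the sketch below, reconstructed from the @44-@50 trace markers and assuming $rpc and $PERF_PID as in the earlier sketch; it is a reading of the trace, not the script verbatim:

    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do                           # line 44: stress while spdk_nvme_perf is alive
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1      # line 45: hot-remove the Delay0 namespace
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # line 46: hot-add it again
        null_size=$((null_size + 1))                                    # line 49
        $rpc bdev_null_resize NULL1 "$null_size"                        # line 50: resize NULL1 under I/O ("true" in the log)
    done

When the 30-second perf run exits, the kill -0 check fails ("No such process" further down), the loop ends, both namespaces are removed, and the multi-threaded phase begins.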
00:10:31.096 16:02:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.357 16:02:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:10:31.357 16:02:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:31.357 true 00:10:31.357 16:02:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:31.357 16:02:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.618 16:02:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.618 16:02:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:10:31.618 16:02:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:10:31.878 true 00:10:31.878 16:02:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:31.878 16:02:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.139 16:02:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.139 16:02:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:10:32.139 16:02:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:10:32.399 true 00:10:32.399 16:02:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:32.399 16:02:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.660 16:02:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.660 16:02:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:10:32.660 16:02:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:10:32.920 true 00:10:32.920 16:02:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:32.920 16:02:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.920 16:02:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:33.181 16:02:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:10:33.181 16:02:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:10:33.181 true 00:10:33.442 16:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:33.442 16:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:34.384 16:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:34.384 16:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:10:34.384 16:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:10:34.645 true 00:10:34.646 16:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:34.646 16:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:34.646 16:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:34.906 16:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:10:34.906 16:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:10:34.906 true 00:10:34.906 16:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:34.906 16:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.166 16:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:35.427 16:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:10:35.427 16:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:10:35.427 true 00:10:35.427 16:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:35.427 16:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.710 16:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:36.012 16:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 
00:10:36.012 16:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:10:36.012 true 00:10:36.012 16:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:36.012 16:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.273 16:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:36.273 16:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:10:36.273 16:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:10:36.534 true 00:10:36.534 16:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:36.534 16:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.795 16:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:36.795 16:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:10:36.795 16:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:10:37.056 true 00:10:37.056 16:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:37.056 16:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.316 16:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:37.316 16:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:10:37.316 16:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:10:37.577 true 00:10:37.577 16:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:37.577 16:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.837 16:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:37.837 16:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:10:37.837 16:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1040 00:10:38.099 true 00:10:38.099 16:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:38.099 16:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.360 16:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:38.360 16:02:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:10:38.360 16:02:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:10:38.620 true 00:10:38.620 16:02:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:38.620 16:02:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:39.562 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:39.562 16:02:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:39.562 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:39.562 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:39.562 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:39.562 16:02:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:10:39.562 16:02:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:10:39.823 true 00:10:39.823 16:02:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:39.823 16:02:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:40.084 16:02:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:40.084 16:02:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:10:40.084 16:02:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:10:40.344 true 00:10:40.344 16:02:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:40.344 16:02:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:40.605 16:02:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:40.605 16:02:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1044 00:10:40.605 16:02:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:10:40.865 true 00:10:40.865 16:02:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:40.865 16:02:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.126 16:02:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:41.126 16:02:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:10:41.126 16:02:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:10:41.387 true 00:10:41.387 16:02:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:41.387 16:02:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.647 16:02:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:41.647 16:02:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:10:41.648 16:02:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:10:41.908 true 00:10:41.908 16:02:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:41.908 16:02:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:42.167 16:02:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:42.167 16:02:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:10:42.167 16:02:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:10:42.428 true 00:10:42.428 16:02:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:42.428 16:02:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:42.428 16:02:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:42.689 16:02:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:10:42.689 16:02:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:10:42.949 true 00:10:42.949 16:02:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:42.949 16:02:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:42.949 16:02:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:43.210 16:02:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:10:43.210 16:02:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:10:43.471 true 00:10:43.471 16:02:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:43.471 16:02:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:43.471 16:02:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:43.731 16:02:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:10:43.731 16:02:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:10:43.992 true 00:10:43.992 16:02:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:43.992 16:02:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:43.992 16:02:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:44.252 16:02:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:10:44.252 16:02:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:10:44.252 true 00:10:44.513 16:02:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977 00:10:44.513 16:02:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:44.513 16:02:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:44.774 Initializing NVMe Controllers 00:10:44.774 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:44.774 Controller IO queue size 128, less than required. 00:10:44.774 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:10:44.774 Controller IO queue size 128, less than required.
00:10:44.774 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:44.774 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:44.774 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:10:44.774 Initialization complete. Launching workers.
00:10:44.774 ========================================================
00:10:44.774                                                                                Latency(us)
00:10:44.774 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:10:44.774 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     210.86       0.10  133342.89    2267.22 1291250.94
00:10:44.774 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:    6886.15       3.36   18527.63    2331.98  502890.51
00:10:44.774 ========================================================
00:10:44.774 Total                                                                    :    7097.01       3.47   21938.88    2267.22 1291250.94
00:10:44.774
00:10:44.774 16:02:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052
00:10:44.774 16:02:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052
00:10:44.774 true
00:10:44.774 16:02:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2153977
00:10:44.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2153977) - No such process
00:10:44.774 16:02:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2153977
00:10:44.774 16:02:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:45.034 16:02:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:45.293 16:02:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:10:45.293 16:02:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:10:45.293 16:02:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:10:45.293 16:02:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:45.293 16:02:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:10:45.293 null0
00:10:45.293 16:02:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:45.293 16:02:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:45.293 16:02:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:10:45.553 null1
00:10:45.553 16:02:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:45.553 16:02:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:45.553 16:02:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:10:45.553 null2 00:10:45.812 16:02:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:45.812 16:02:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:45.812 16:02:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:10:45.812 null3 00:10:45.812 16:02:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:45.812 16:02:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:45.812 16:02:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:46.072 null4 00:10:46.072 16:02:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:46.072 16:02:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:46.072 16:02:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:46.072 null5 00:10:46.072 16:02:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:46.072 16:02:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:46.072 16:02:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:46.333 null6 00:10:46.333 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:46.333 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:46.333 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:46.594 null7 00:10:46.594 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:46.594 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:46.594 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:46.594 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:46.594 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:46.594 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:46.594 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:46.594 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:46.594 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:46.594 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.594 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:46.594 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:46.594 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:46.594 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:46.594 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:46.594 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:46.594 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:46.594 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:46.594 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.594 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:46.594 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:46.594 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:46.594 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:46.594 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:46.594 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:46.594 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:46.594 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.594 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
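The interleaved @14-@18 entries here come from eight concurrent copies of the add_remove helper, one per namespace/null bdev pair (add_remove 1 null0 through add_remove 8 null7). A sketch of that helper, reconstructed from the xtrace; the for-loop form is an assumption, while the ten-iteration bound, the -n <nsid> argument order of nvmf_subsystem_add_ns, and the NQN are taken from the trace:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    add_remove() {
        local nsid=$1 bdev=$2                                        # @14: e.g. "add_remove 1 null0"
        for ((i = 0; i < 10; i++)); do                               # @16: ten add/remove rounds per worker
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"  # @17: attach $bdev as namespace $nsid
            $rpc_py nvmf_subsystem_remove_ns "$nqn" "$nsid"          # @18: detach it again
        done
    }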
00:10:46.594 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:46.594 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:46.594 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:46.594 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:46.594 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:46.594 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:46.594 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.594 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:46.594 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:46.594 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:46.594 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:46.594 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:46.594 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:46.594 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:46.595 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.595 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:46.595 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:46.595 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:46.595 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:46.595 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:46.595 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:46.595 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:46.595 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
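The @58-@66 entries show how those workers are launched: eight null bdevs are created, one add_remove worker per bdev is started in the background, and the script then waits on all of their pids (the wait 2160619 2160622 ... entry further down). A hedged reconstruction that reuses rpc_py and the add_remove helper sketched above; the explicit & backgrounding and the two separate loops are inferred from the pids+=($!) and wait lines rather than copied from the script:

    nthreads=8                                      # @58
    pids=()                                         # @58

    for ((i = 0; i < nthreads; i++)); do            # @59-@60: create null0..null7
        $rpc_py bdev_null_create "null$i" 100 4096  # size/block-size arguments as printed in the trace
    done

    for ((i = 0; i < nthreads; i++)); do            # @62-@64: one background worker per bdev
        add_remove "$((i + 1))" "null$i" &          # @63: namespace i+1 backed by null$i
        pids+=($!)                                  # @64: remember the worker pid
    done

    wait "${pids[@]}"                               # @66: block until all eight workers finish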
00:10:46.595 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.595 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:46.595 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:46.595 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:46.595 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:46.595 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:46.595 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:46.595 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:46.595 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:46.595 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:46.595 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.595 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2160619 2160622 2160623 2160626 2160629 2160632 2160635 2160637 00:10:46.595 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:46.595 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:46.595 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:46.595 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:46.595 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.595 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:46.595 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:46.595 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:46.595 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:46.595 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:46.595 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:46.856 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:46.856 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:46.856 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:46.856 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.856 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.856 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:46.856 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.856 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.856 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:46.856 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.856 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.856 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:46.856 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.856 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.856 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:46.856 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.856 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.856 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:46.856 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.856 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.856 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:46.856 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.856 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.856 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:46.856 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( ++i )) 00:10:46.856 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.856 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:46.856 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.117 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:47.117 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:47.117 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:47.117 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:47.117 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:47.117 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:47.117 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:47.117 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.117 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.117 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:47.117 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.117 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.117 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:47.117 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.117 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.117 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:47.117 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.117 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.117 16:02:22 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:47.117 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.117 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.117 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:47.117 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.117 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.117 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:47.117 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.117 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.117 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:47.377 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.377 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.377 16:02:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:47.377 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.377 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:47.377 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:47.377 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:47.377 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:47.377 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:47.377 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:47.377 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:47.377 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.377 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.377 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:47.637 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.637 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.637 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:47.637 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.637 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.637 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:47.637 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.637 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.637 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:47.637 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.637 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.637 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:47.637 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.637 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.637 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:47.637 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.637 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.637 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:47.637 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.637 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.637 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:47.637 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.637 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:47.637 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:47.637 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:47.637 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:47.637 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:47.897 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:47.897 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:47.897 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.897 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.897 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:47.897 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.897 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.897 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:47.897 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.897 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.897 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:47.897 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.897 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.897 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:47.897 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.897 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 
)) 00:10:47.897 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:47.897 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.897 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.897 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:47.897 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.897 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.897 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:47.897 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.897 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.897 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:47.897 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.897 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:48.156 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:48.156 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:48.156 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:48.156 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:48.156 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:48.156 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:48.156 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.156 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.156 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:48.156 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.156 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.156 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:48.156 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.156 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.156 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:48.156 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.156 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.156 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:48.156 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.156 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.156 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:48.156 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.156 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.156 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:48.156 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.156 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.156 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:48.417 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:48.417 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.417 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.417 16:02:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:48.417 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:48.417 
16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:48.417 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:48.417 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:48.417 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:48.417 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.417 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.417 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:48.417 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:48.417 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.417 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.417 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:48.417 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:48.677 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.677 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.677 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:48.677 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.677 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.677 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:48.677 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.677 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.677 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:48.677 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:48.677 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.677 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.677 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:48.677 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:48.677 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.677 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.677 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:48.677 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.677 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.677 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:48.677 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:48.677 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:48.677 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.677 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.677 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:48.677 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.677 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.677 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:48.677 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:48.677 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:48.936 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:48.936 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:48.936 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.936 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.936 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:48.936 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.936 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.937 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:48.937 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:48.937 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:48.937 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.937 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.937 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:48.937 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.937 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.937 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:48.937 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.937 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.937 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:48.937 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.937 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.937 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:49.196 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:49.196 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:49.196 16:02:24 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.196 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.196 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:49.196 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:49.196 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:49.196 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.196 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.196 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:49.196 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:49.196 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:49.196 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.196 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.196 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:49.196 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:49.196 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.196 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.196 16:02:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:49.196 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.196 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.196 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:49.457 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.457 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.457 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:49.457 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:49.457 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.457 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.457 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:49.457 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.457 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.457 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:49.457 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:49.457 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:49.457 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.457 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.457 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:49.457 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:49.457 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:49.457 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:49.457 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.457 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.457 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:49.457 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.457 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.457 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:49.717 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( ++i )) 00:10:49.717 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.717 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:49.717 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:49.717 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:49.717 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.717 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.717 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:49.717 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.717 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.717 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:49.717 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.717 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.717 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:49.717 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:49.717 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:49.717 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:49.717 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.717 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.717 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:49.717 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.717 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.717 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:49.978 16:02:25 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:49.978 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.978 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.978 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.978 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.978 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:49.978 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.978 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.978 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.978 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.978 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:49.978 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.978 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.238 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.238 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.238 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.238 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.238 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:50.238 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:10:50.238 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:50.238 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:10:50.238 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:50.238 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:10:50.238 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:50.238 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:50.238 rmmod nvme_tcp 00:10:50.238 rmmod nvme_fabrics 00:10:50.238 rmmod nvme_keyring 00:10:50.238 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:50.238 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:10:50.238 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:10:50.238 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2153599 ']' 00:10:50.238 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2153599 00:10:50.238 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 
2153599 ']' 00:10:50.238 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 2153599 00:10:50.238 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:10:50.238 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:50.238 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2153599 00:10:50.238 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:50.238 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:50.238 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2153599' 00:10:50.238 killing process with pid 2153599 00:10:50.238 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 2153599 00:10:50.238 16:02:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 2153599 00:10:50.498 16:02:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:50.498 16:02:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:50.498 16:02:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:50.498 16:02:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:50.498 16:02:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:50.498 16:02:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.498 16:02:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:50.498 16:02:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:52.468 16:02:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:52.468 00:10:52.468 real 0m47.773s 00:10:52.468 user 3m11.831s 00:10:52.468 sys 0m15.234s 00:10:52.468 16:02:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:52.468 16:02:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:52.468 ************************************ 00:10:52.468 END TEST nvmf_ns_hotplug_stress 00:10:52.468 ************************************ 00:10:52.468 16:02:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:52.468 16:02:28 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:52.468 16:02:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:52.468 16:02:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:52.468 16:02:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:52.468 ************************************ 00:10:52.468 START TEST nvmf_connect_stress 00:10:52.468 ************************************ 00:10:52.468 16:02:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:52.729 * Looking for test storage... 
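The ns_hotplug_stress run that finishes above stresses namespace hot-plug by repeatedly attaching and detaching null bdevs as namespaces 1-8 of nqn.2016-06.io.spdk:cnode1 through rpc.py, for ten rounds (the (( i < 10 )) guard visible in the trace); the add/remove ordering looks randomized and the calls overlap in time. A minimal sketch of that churn, assuming the null0-null7 bdevs already exist and using shuf only to mimic the shuffled ordering (the actual selection logic of ns_hotplug_stress.sh is not shown in this log):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subsys=nqn.2016-06.io.spdk:cnode1
for ((i = 0; i < 10; i++)); do
    for nsid in $(seq 1 8 | shuf); do
        # attach null<nsid-1> as namespace <nsid>, then detach it again
        "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$subsys" "null$((nsid - 1))"
        "$rpc" nvmf_subsystem_remove_ns "$subsys" "$nsid"
    done
done

After the loop the harness tears the target down (modprobe -r nvme-tcp / nvme-fabrics, kill of nvmf_tgt pid 2153599) and the run moves on to the connect_stress test, whose output follows.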
00:10:52.729 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:52.729 16:02:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:52.729 16:02:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:10:52.729 16:02:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:52.729 16:02:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:52.729 16:02:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:52.729 16:02:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:52.729 16:02:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:52.729 16:02:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:52.729 16:02:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:52.729 16:02:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:52.729 16:02:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:52.729 16:02:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:52.729 16:02:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:52.729 16:02:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:52.729 16:02:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:52.729 16:02:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:52.729 16:02:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:52.729 16:02:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:52.729 16:02:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:52.729 16:02:28 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:52.729 16:02:28 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:52.729 16:02:28 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:52.729 16:02:28 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.729 16:02:28 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.729 16:02:28 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.729 16:02:28 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:10:52.729 16:02:28 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.729 16:02:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:10:52.729 16:02:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:52.729 16:02:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:52.729 16:02:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:52.729 16:02:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:52.729 16:02:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:52.729 16:02:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:52.729 16:02:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:52.729 16:02:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:52.729 16:02:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:52.729 16:02:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:52.729 16:02:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:52.729 16:02:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:52.729 16:02:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:52.729 16:02:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:52.729 16:02:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:52.729 16:02:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:10:52.729 16:02:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:52.729 16:02:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:52.729 16:02:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:52.729 16:02:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:52.729 16:02:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:59.375 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:59.375 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:59.375 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:59.375 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:59.375 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:59.375 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:59.375 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:59.375 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:59.375 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:59.375 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:10:59.375 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:59.375 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:10:59.375 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:59.376 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:59.376 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:59.376 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:59.376 16:02:35 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:59.376 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:59.376 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:59.637 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:59.637 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:59.637 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:59.637 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:59.637 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:59.637 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:59.637 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:59.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:59.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.562 ms 00:10:59.637 00:10:59.637 --- 10.0.0.2 ping statistics --- 00:10:59.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:59.637 rtt min/avg/max/mdev = 0.562/0.562/0.562/0.000 ms 00:10:59.637 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:59.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:59.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.399 ms 00:10:59.637 00:10:59.637 --- 10.0.0.1 ping statistics --- 00:10:59.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:59.637 rtt min/avg/max/mdev = 0.399/0.399/0.399/0.000 ms 00:10:59.637 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:59.637 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:10:59.637 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:59.637 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:59.637 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:59.637 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:59.637 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:59.637 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:59.637 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:59.898 16:02:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:10:59.898 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:59.898 16:02:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:59.898 16:02:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:59.898 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2165648 00:10:59.898 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2165648 00:10:59.898 16:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:59.898 16:02:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 2165648 ']' 00:10:59.898 16:02:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:59.898 16:02:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:59.898 16:02:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:59.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:59.898 16:02:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:59.898 16:02:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:59.898 [2024-07-15 16:02:35.544114] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
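At this point nvmftestinit has built the TCP test topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace with address 10.0.0.2 (the target side), cvl_0_1 stays in the root namespace with 10.0.0.1 (the initiator side), and the two pings above verify reachability in both directions. The harness then starts nvmf_tgt inside that namespace (the ip netns exec ... nvmf_tgt -i 0 -e 0xFFFF -m 0xE line above) and, once waitforlisten sees the RPC socket, provisions it with the rpc_cmd calls shown further below. A rough standalone equivalent of that setup, assuming rpc.py's default /var/tmp/spdk.sock socket (the harness goes through its rpc_cmd wrapper instead):

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$spdk/scripts/rpc.py"

# start the target application in the target namespace
ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &

# once the RPC socket is up: TCP transport, one subsystem with a null bdev,
# and a listener on the namespaced address
"$rpc" nvmf_create_transport -t tcp -o -u 8192
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$rpc" bdev_null_create NULL1 1000 512

# connect_stress (PERF_PID 2165995 in this run) then cycles connections against
# the listener for its 10-second run while the harness polls it with kill -0
"$spdk/test/nvme/connect_stress/connect_stress" -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10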
00:10:59.898 [2024-07-15 16:02:35.544191] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:59.898 EAL: No free 2048 kB hugepages reported on node 1 00:10:59.898 [2024-07-15 16:02:35.632184] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:59.898 [2024-07-15 16:02:35.727132] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:59.898 [2024-07-15 16:02:35.727190] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:59.898 [2024-07-15 16:02:35.727198] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:59.898 [2024-07-15 16:02:35.727205] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:59.898 [2024-07-15 16:02:35.727211] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:59.898 [2024-07-15 16:02:35.727344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:59.898 [2024-07-15 16:02:35.727643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:59.899 [2024-07-15 16:02:35.727645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:00.842 16:02:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:00.843 [2024-07-15 16:02:36.361355] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:00.843 [2024-07-15 16:02:36.385251] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:00.843 NULL1 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2165995 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:00.843 EAL: No free 2048 kB hugepages reported on node 1 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:00.843 16:02:36 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2165995 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.843 16:02:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:01.104 16:02:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.104 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2165995 00:11:01.104 16:02:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:01.104 16:02:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.104 16:02:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:01.365 16:02:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.365 16:02:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2165995 00:11:01.365 16:02:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:01.365 16:02:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.365 16:02:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:01.937 16:02:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.937 16:02:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # 
kill -0 2165995 00:11:01.937 16:02:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:01.937 16:02:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.937 16:02:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.197 16:02:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.197 16:02:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2165995 00:11:02.197 16:02:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:02.197 16:02:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.197 16:02:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.458 16:02:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.458 16:02:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2165995 00:11:02.458 16:02:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:02.458 16:02:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.458 16:02:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.718 16:02:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.718 16:02:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2165995 00:11:02.718 16:02:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:02.718 16:02:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.718 16:02:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.980 16:02:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.980 16:02:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2165995 00:11:02.980 16:02:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:02.980 16:02:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.980 16:02:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:03.552 16:02:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.552 16:02:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2165995 00:11:03.552 16:02:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:03.552 16:02:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.552 16:02:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:03.813 16:02:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.813 16:02:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2165995 00:11:03.813 16:02:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:03.813 16:02:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.813 16:02:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:04.075 16:02:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.075 16:02:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2165995 00:11:04.075 16:02:39 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:04.075 16:02:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.075 16:02:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:04.335 16:02:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.335 16:02:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2165995 00:11:04.335 16:02:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:04.335 16:02:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.335 16:02:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:04.596 16:02:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.596 16:02:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2165995 00:11:04.596 16:02:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:04.596 16:02:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.596 16:02:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.167 16:02:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.167 16:02:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2165995 00:11:05.167 16:02:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:05.167 16:02:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.167 16:02:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.428 16:02:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.428 16:02:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2165995 00:11:05.428 16:02:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:05.428 16:02:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.428 16:02:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.704 16:02:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.704 16:02:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2165995 00:11:05.704 16:02:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:05.704 16:02:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.704 16:02:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.964 16:02:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.964 16:02:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2165995 00:11:05.964 16:02:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:05.964 16:02:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.964 16:02:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:06.225 16:02:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.225 16:02:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2165995 00:11:06.225 16:02:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:11:06.225 16:02:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.225 16:02:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:06.797 16:02:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.797 16:02:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2165995 00:11:06.797 16:02:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:06.797 16:02:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.797 16:02:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.058 16:02:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.058 16:02:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2165995 00:11:07.058 16:02:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:07.058 16:02:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.058 16:02:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.319 16:02:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.319 16:02:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2165995 00:11:07.319 16:02:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:07.319 16:02:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.319 16:02:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.579 16:02:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.579 16:02:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2165995 00:11:07.579 16:02:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:07.579 16:02:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.579 16:02:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.839 16:02:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.839 16:02:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2165995 00:11:07.839 16:02:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:07.839 16:02:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.839 16:02:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.408 16:02:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.408 16:02:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2165995 00:11:08.408 16:02:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:08.408 16:02:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.408 16:02:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.668 16:02:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.669 16:02:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2165995 00:11:08.669 16:02:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:08.669 16:02:44 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.669 16:02:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.929 16:02:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.929 16:02:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2165995 00:11:08.929 16:02:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:08.929 16:02:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.929 16:02:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.190 16:02:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.190 16:02:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2165995 00:11:09.190 16:02:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.190 16:02:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.190 16:02:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.451 16:02:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.451 16:02:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2165995 00:11:09.451 16:02:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.451 16:02:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.451 16:02:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.031 16:02:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.031 16:02:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2165995 00:11:10.031 16:02:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.031 16:02:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.031 16:02:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.291 16:02:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.292 16:02:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2165995 00:11:10.292 16:02:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.292 16:02:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.292 16:02:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.556 16:02:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.556 16:02:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2165995 00:11:10.556 16:02:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.556 16:02:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.556 16:02:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.817 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:10.817 16:02:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.817 16:02:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2165995 00:11:10.817 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2165995) - No such process 00:11:10.817 16:02:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2165995 00:11:10.817 16:02:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:10.817 16:02:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:10.817 16:02:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:10.817 16:02:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:10.817 16:02:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:11:10.817 16:02:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:10.817 16:02:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:11:10.817 16:02:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:10.817 16:02:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:10.817 rmmod nvme_tcp 00:11:10.817 rmmod nvme_fabrics 00:11:10.817 rmmod nvme_keyring 00:11:10.817 16:02:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:10.817 16:02:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:11:10.817 16:02:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:11:10.817 16:02:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2165648 ']' 00:11:10.817 16:02:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2165648 00:11:10.817 16:02:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 2165648 ']' 00:11:10.817 16:02:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 2165648 00:11:10.817 16:02:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:11:10.817 16:02:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:11.078 16:02:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2165648 00:11:11.079 16:02:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:11.079 16:02:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:11.079 16:02:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2165648' 00:11:11.079 killing process with pid 2165648 00:11:11.079 16:02:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 2165648 00:11:11.079 16:02:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 2165648 00:11:11.079 16:02:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:11.079 16:02:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:11.079 16:02:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:11.079 16:02:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:11.079 16:02:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:11.079 16:02:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:11.079 16:02:46 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:11.079 16:02:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:13.623 16:02:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:13.623 00:11:13.623 real 0m20.651s 00:11:13.623 user 0m41.829s 00:11:13.623 sys 0m8.597s 00:11:13.623 16:02:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:13.623 16:02:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:13.623 ************************************ 00:11:13.623 END TEST nvmf_connect_stress 00:11:13.623 ************************************ 00:11:13.623 16:02:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:13.623 16:02:48 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:13.623 16:02:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:13.623 16:02:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:13.623 16:02:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:13.623 ************************************ 00:11:13.623 START TEST nvmf_fused_ordering 00:11:13.623 ************************************ 00:11:13.623 16:02:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:13.623 * Looking for test storage... 00:11:13.623 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:13.623 16:02:49 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:13.623 16:02:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:13.623 16:02:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:13.623 16:02:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:13.623 16:02:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:13.623 16:02:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:13.623 16:02:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:13.623 16:02:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:13.623 16:02:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:13.623 16:02:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:13.623 16:02:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:13.623 16:02:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:13.623 16:02:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:13.623 16:02:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:13.623 16:02:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:13.623 16:02:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:13.623 16:02:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 
-- # NET_TYPE=phy 00:11:13.623 16:02:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:13.623 16:02:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:13.623 16:02:49 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:13.623 16:02:49 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:13.623 16:02:49 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:13.623 16:02:49 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.623 16:02:49 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.623 16:02:49 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.623 16:02:49 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:13.623 16:02:49 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.623 16:02:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:11:13.623 16:02:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:13.623 16:02:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:13.623 16:02:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:13.623 16:02:49 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:13.623 16:02:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:13.623 16:02:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:13.623 16:02:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:13.623 16:02:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:13.623 16:02:49 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:13.623 16:02:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:13.623 16:02:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:13.623 16:02:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:13.623 16:02:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:13.623 16:02:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:13.623 16:02:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:13.623 16:02:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:13.623 16:02:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:13.623 16:02:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:13.623 16:02:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:13.623 16:02:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:11:13.623 16:02:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:20.234 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:20.234 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:20.234 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:20.234 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush 
cvl_0_0 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:20.234 16:02:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:20.496 16:02:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:20.496 16:02:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:20.496 16:02:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:20.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:20.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.559 ms 00:11:20.496 00:11:20.496 --- 10.0.0.2 ping statistics --- 00:11:20.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.496 rtt min/avg/max/mdev = 0.559/0.559/0.559/0.000 ms 00:11:20.496 16:02:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:20.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:20.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.387 ms 00:11:20.496 00:11:20.496 --- 10.0.0.1 ping statistics --- 00:11:20.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.496 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:11:20.496 16:02:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:20.496 16:02:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:11:20.496 16:02:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:20.496 16:02:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:20.496 16:02:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:20.496 16:02:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:20.496 16:02:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:20.496 16:02:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:20.496 16:02:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:20.496 16:02:56 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:20.496 16:02:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:20.496 16:02:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:20.496 16:02:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:20.496 16:02:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2172027 00:11:20.496 16:02:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2172027 00:11:20.496 16:02:56 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:20.496 16:02:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 2172027 ']' 00:11:20.496 16:02:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:20.496 16:02:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:20.496 16:02:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:20.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:20.496 16:02:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:20.496 16:02:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:20.496 [2024-07-15 16:02:56.232823] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:11:20.496 [2024-07-15 16:02:56.232894] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:20.496 EAL: No free 2048 kB hugepages reported on node 1 00:11:20.496 [2024-07-15 16:02:56.323056] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.757 [2024-07-15 16:02:56.416183] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:20.757 [2024-07-15 16:02:56.416242] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:20.757 [2024-07-15 16:02:56.416250] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:20.757 [2024-07-15 16:02:56.416257] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:20.757 [2024-07-15 16:02:56.416263] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
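At this point the target is up: the harness has moved the e810 port cvl_0_0 (10.0.0.2) into the cvl_0_0_ns_spdk namespace to act as the target side, left cvl_0_1 (10.0.0.1) in the root namespace as the initiator, and launched nvmf_tgt inside that namespace. The shell sketch below summarizes what the rpc_cmd calls in the following trace configure before the fused_ordering tool runs; the direct rpc.py invocations, socket default, and readiness loop are illustrative assumptions rather than the harness's literal nvmfappstart/rpc_cmd helpers.

  # Sketch of the target bring-up and subsystem configuration traced below
  # (the test script itself uses its own nvmfappstart/rpc_cmd wrappers).
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # Wait for the target's RPC socket (default /var/tmp/spdk.sock) to answer.
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_null_create NULL1 1000 512   # 1000 MB null bdev, 512 B blocks -> the 1 GB namespace reported below
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  # Initiator side (root namespace) then drives the fused-command workload:
  ./test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'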
00:11:20.757 [2024-07-15 16:02:56.416298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:21.330 16:02:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:21.330 16:02:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:11:21.330 16:02:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:21.330 16:02:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:21.330 16:02:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:21.330 16:02:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:21.331 16:02:57 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:21.331 16:02:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.331 16:02:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:21.331 [2024-07-15 16:02:57.072441] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:21.331 16:02:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.331 16:02:57 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:21.331 16:02:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.331 16:02:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:21.331 16:02:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.331 16:02:57 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:21.331 16:02:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.331 16:02:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:21.331 [2024-07-15 16:02:57.096642] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:21.331 16:02:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.331 16:02:57 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:21.331 16:02:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.331 16:02:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:21.331 NULL1 00:11:21.331 16:02:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.331 16:02:57 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:21.331 16:02:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.331 16:02:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:21.331 16:02:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.331 16:02:57 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:21.331 16:02:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.331 16:02:57 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:21.331 16:02:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.331 16:02:57 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:21.331 [2024-07-15 16:02:57.166395] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:11:21.331 [2024-07-15 16:02:57.166436] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2172373 ] 00:11:21.592 EAL: No free 2048 kB hugepages reported on node 1 00:11:21.853 Attached to nqn.2016-06.io.spdk:cnode1 00:11:21.853 Namespace ID: 1 size: 1GB 00:11:21.853 fused_ordering(0) 00:11:21.853 fused_ordering(1) 00:11:21.853 fused_ordering(2) 00:11:21.853 fused_ordering(3) 00:11:21.853 fused_ordering(4) 00:11:21.853 fused_ordering(5) 00:11:21.853 fused_ordering(6) 00:11:21.853 fused_ordering(7) 00:11:21.853 fused_ordering(8) 00:11:21.853 fused_ordering(9) 00:11:21.853 fused_ordering(10) 00:11:21.853 fused_ordering(11) 00:11:21.853 fused_ordering(12) 00:11:21.853 fused_ordering(13) 00:11:21.853 fused_ordering(14) 00:11:21.853 fused_ordering(15) 00:11:21.853 fused_ordering(16) 00:11:21.853 fused_ordering(17) 00:11:21.853 fused_ordering(18) 00:11:21.853 fused_ordering(19) 00:11:21.853 fused_ordering(20) 00:11:21.853 fused_ordering(21) 00:11:21.853 fused_ordering(22) 00:11:21.853 fused_ordering(23) 00:11:21.853 fused_ordering(24) 00:11:21.853 fused_ordering(25) 00:11:21.853 fused_ordering(26) 00:11:21.853 fused_ordering(27) 00:11:21.853 fused_ordering(28) 00:11:21.853 fused_ordering(29) 00:11:21.853 fused_ordering(30) 00:11:21.853 fused_ordering(31) 00:11:21.853 fused_ordering(32) 00:11:21.853 fused_ordering(33) 00:11:21.853 fused_ordering(34) 00:11:21.853 fused_ordering(35) 00:11:21.853 fused_ordering(36) 00:11:21.853 fused_ordering(37) 00:11:21.853 fused_ordering(38) 00:11:21.853 fused_ordering(39) 00:11:21.853 fused_ordering(40) 00:11:21.853 fused_ordering(41) 00:11:21.853 fused_ordering(42) 00:11:21.853 fused_ordering(43) 00:11:21.853 fused_ordering(44) 00:11:21.853 fused_ordering(45) 00:11:21.853 fused_ordering(46) 00:11:21.853 fused_ordering(47) 00:11:21.853 fused_ordering(48) 00:11:21.853 fused_ordering(49) 00:11:21.853 fused_ordering(50) 00:11:21.853 fused_ordering(51) 00:11:21.853 fused_ordering(52) 00:11:21.853 fused_ordering(53) 00:11:21.853 fused_ordering(54) 00:11:21.853 fused_ordering(55) 00:11:21.853 fused_ordering(56) 00:11:21.853 fused_ordering(57) 00:11:21.853 fused_ordering(58) 00:11:21.853 fused_ordering(59) 00:11:21.853 fused_ordering(60) 00:11:21.853 fused_ordering(61) 00:11:21.853 fused_ordering(62) 00:11:21.853 fused_ordering(63) 00:11:21.853 fused_ordering(64) 00:11:21.853 fused_ordering(65) 00:11:21.853 fused_ordering(66) 00:11:21.853 fused_ordering(67) 00:11:21.853 fused_ordering(68) 00:11:21.853 fused_ordering(69) 00:11:21.853 fused_ordering(70) 00:11:21.853 fused_ordering(71) 00:11:21.853 fused_ordering(72) 00:11:21.853 fused_ordering(73) 00:11:21.853 fused_ordering(74) 00:11:21.853 fused_ordering(75) 00:11:21.853 fused_ordering(76) 00:11:21.853 fused_ordering(77) 00:11:21.853 fused_ordering(78) 00:11:21.853 
fused_ordering(79) 00:11:21.853 fused_ordering(80) 00:11:21.853 fused_ordering(81) 00:11:21.853 fused_ordering(82) 00:11:21.853 fused_ordering(83) 00:11:21.853 fused_ordering(84) 00:11:21.853 fused_ordering(85) 00:11:21.853 fused_ordering(86) 00:11:21.853 fused_ordering(87) 00:11:21.853 fused_ordering(88) 00:11:21.853 fused_ordering(89) 00:11:21.853 fused_ordering(90) 00:11:21.853 fused_ordering(91) 00:11:21.853 fused_ordering(92) 00:11:21.853 fused_ordering(93) 00:11:21.853 fused_ordering(94) 00:11:21.853 fused_ordering(95) 00:11:21.853 fused_ordering(96) 00:11:21.853 fused_ordering(97) 00:11:21.853 fused_ordering(98) 00:11:21.853 fused_ordering(99) 00:11:21.853 fused_ordering(100) 00:11:21.853 fused_ordering(101) 00:11:21.853 fused_ordering(102) 00:11:21.853 fused_ordering(103) 00:11:21.853 fused_ordering(104) 00:11:21.853 fused_ordering(105) 00:11:21.853 fused_ordering(106) 00:11:21.853 fused_ordering(107) 00:11:21.853 fused_ordering(108) 00:11:21.853 fused_ordering(109) 00:11:21.853 fused_ordering(110) 00:11:21.853 fused_ordering(111) 00:11:21.853 fused_ordering(112) 00:11:21.853 fused_ordering(113) 00:11:21.853 fused_ordering(114) 00:11:21.853 fused_ordering(115) 00:11:21.853 fused_ordering(116) 00:11:21.853 fused_ordering(117) 00:11:21.853 fused_ordering(118) 00:11:21.853 fused_ordering(119) 00:11:21.853 fused_ordering(120) 00:11:21.853 fused_ordering(121) 00:11:21.853 fused_ordering(122) 00:11:21.853 fused_ordering(123) 00:11:21.853 fused_ordering(124) 00:11:21.853 fused_ordering(125) 00:11:21.853 fused_ordering(126) 00:11:21.853 fused_ordering(127) 00:11:21.853 fused_ordering(128) 00:11:21.853 fused_ordering(129) 00:11:21.853 fused_ordering(130) 00:11:21.853 fused_ordering(131) 00:11:21.853 fused_ordering(132) 00:11:21.853 fused_ordering(133) 00:11:21.853 fused_ordering(134) 00:11:21.853 fused_ordering(135) 00:11:21.853 fused_ordering(136) 00:11:21.853 fused_ordering(137) 00:11:21.853 fused_ordering(138) 00:11:21.853 fused_ordering(139) 00:11:21.853 fused_ordering(140) 00:11:21.853 fused_ordering(141) 00:11:21.853 fused_ordering(142) 00:11:21.853 fused_ordering(143) 00:11:21.853 fused_ordering(144) 00:11:21.853 fused_ordering(145) 00:11:21.853 fused_ordering(146) 00:11:21.853 fused_ordering(147) 00:11:21.853 fused_ordering(148) 00:11:21.853 fused_ordering(149) 00:11:21.853 fused_ordering(150) 00:11:21.853 fused_ordering(151) 00:11:21.853 fused_ordering(152) 00:11:21.853 fused_ordering(153) 00:11:21.853 fused_ordering(154) 00:11:21.853 fused_ordering(155) 00:11:21.853 fused_ordering(156) 00:11:21.853 fused_ordering(157) 00:11:21.853 fused_ordering(158) 00:11:21.853 fused_ordering(159) 00:11:21.853 fused_ordering(160) 00:11:21.853 fused_ordering(161) 00:11:21.853 fused_ordering(162) 00:11:21.853 fused_ordering(163) 00:11:21.853 fused_ordering(164) 00:11:21.853 fused_ordering(165) 00:11:21.853 fused_ordering(166) 00:11:21.853 fused_ordering(167) 00:11:21.853 fused_ordering(168) 00:11:21.853 fused_ordering(169) 00:11:21.853 fused_ordering(170) 00:11:21.853 fused_ordering(171) 00:11:21.853 fused_ordering(172) 00:11:21.853 fused_ordering(173) 00:11:21.853 fused_ordering(174) 00:11:21.853 fused_ordering(175) 00:11:21.853 fused_ordering(176) 00:11:21.853 fused_ordering(177) 00:11:21.853 fused_ordering(178) 00:11:21.853 fused_ordering(179) 00:11:21.853 fused_ordering(180) 00:11:21.853 fused_ordering(181) 00:11:21.853 fused_ordering(182) 00:11:21.853 fused_ordering(183) 00:11:21.853 fused_ordering(184) 00:11:21.853 fused_ordering(185) 00:11:21.853 fused_ordering(186) 00:11:21.853 
fused_ordering(187) 00:11:21.853 fused_ordering(188) 00:11:21.853 fused_ordering(189) 00:11:21.853 fused_ordering(190) 00:11:21.853 fused_ordering(191) 00:11:21.853 fused_ordering(192) 00:11:21.853 fused_ordering(193) 00:11:21.853 fused_ordering(194) 00:11:21.853 fused_ordering(195) 00:11:21.853 fused_ordering(196) 00:11:21.853 fused_ordering(197) 00:11:21.853 fused_ordering(198) 00:11:21.853 fused_ordering(199) 00:11:21.853 fused_ordering(200) 00:11:21.853 fused_ordering(201) 00:11:21.853 fused_ordering(202) 00:11:21.853 fused_ordering(203) 00:11:21.853 fused_ordering(204) 00:11:21.853 fused_ordering(205) 00:11:22.425 fused_ordering(206) 00:11:22.425 fused_ordering(207) 00:11:22.425 fused_ordering(208) 00:11:22.425 fused_ordering(209) 00:11:22.425 fused_ordering(210) 00:11:22.425 fused_ordering(211) 00:11:22.425 fused_ordering(212) 00:11:22.425 fused_ordering(213) 00:11:22.425 fused_ordering(214) 00:11:22.425 fused_ordering(215) 00:11:22.425 fused_ordering(216) 00:11:22.425 fused_ordering(217) 00:11:22.425 fused_ordering(218) 00:11:22.425 fused_ordering(219) 00:11:22.425 fused_ordering(220) 00:11:22.425 fused_ordering(221) 00:11:22.425 fused_ordering(222) 00:11:22.425 fused_ordering(223) 00:11:22.425 fused_ordering(224) 00:11:22.425 fused_ordering(225) 00:11:22.425 fused_ordering(226) 00:11:22.425 fused_ordering(227) 00:11:22.425 fused_ordering(228) 00:11:22.425 fused_ordering(229) 00:11:22.425 fused_ordering(230) 00:11:22.425 fused_ordering(231) 00:11:22.425 fused_ordering(232) 00:11:22.425 fused_ordering(233) 00:11:22.425 fused_ordering(234) 00:11:22.425 fused_ordering(235) 00:11:22.425 fused_ordering(236) 00:11:22.425 fused_ordering(237) 00:11:22.425 fused_ordering(238) 00:11:22.425 fused_ordering(239) 00:11:22.425 fused_ordering(240) 00:11:22.425 fused_ordering(241) 00:11:22.425 fused_ordering(242) 00:11:22.425 fused_ordering(243) 00:11:22.425 fused_ordering(244) 00:11:22.425 fused_ordering(245) 00:11:22.425 fused_ordering(246) 00:11:22.425 fused_ordering(247) 00:11:22.425 fused_ordering(248) 00:11:22.425 fused_ordering(249) 00:11:22.425 fused_ordering(250) 00:11:22.425 fused_ordering(251) 00:11:22.425 fused_ordering(252) 00:11:22.425 fused_ordering(253) 00:11:22.425 fused_ordering(254) 00:11:22.425 fused_ordering(255) 00:11:22.425 fused_ordering(256) 00:11:22.425 fused_ordering(257) 00:11:22.425 fused_ordering(258) 00:11:22.425 fused_ordering(259) 00:11:22.425 fused_ordering(260) 00:11:22.425 fused_ordering(261) 00:11:22.425 fused_ordering(262) 00:11:22.425 fused_ordering(263) 00:11:22.425 fused_ordering(264) 00:11:22.425 fused_ordering(265) 00:11:22.425 fused_ordering(266) 00:11:22.425 fused_ordering(267) 00:11:22.425 fused_ordering(268) 00:11:22.425 fused_ordering(269) 00:11:22.425 fused_ordering(270) 00:11:22.425 fused_ordering(271) 00:11:22.425 fused_ordering(272) 00:11:22.425 fused_ordering(273) 00:11:22.425 fused_ordering(274) 00:11:22.425 fused_ordering(275) 00:11:22.425 fused_ordering(276) 00:11:22.425 fused_ordering(277) 00:11:22.425 fused_ordering(278) 00:11:22.425 fused_ordering(279) 00:11:22.425 fused_ordering(280) 00:11:22.425 fused_ordering(281) 00:11:22.425 fused_ordering(282) 00:11:22.425 fused_ordering(283) 00:11:22.425 fused_ordering(284) 00:11:22.425 fused_ordering(285) 00:11:22.425 fused_ordering(286) 00:11:22.425 fused_ordering(287) 00:11:22.425 fused_ordering(288) 00:11:22.425 fused_ordering(289) 00:11:22.425 fused_ordering(290) 00:11:22.425 fused_ordering(291) 00:11:22.425 fused_ordering(292) 00:11:22.425 fused_ordering(293) 00:11:22.425 fused_ordering(294) 
00:11:22.425 fused_ordering(295) 00:11:22.425 fused_ordering(296) 00:11:22.425 fused_ordering(297) 00:11:22.425 fused_ordering(298) 00:11:22.425 fused_ordering(299) 00:11:22.425 fused_ordering(300) 00:11:22.425 fused_ordering(301) 00:11:22.425 fused_ordering(302) 00:11:22.425 fused_ordering(303) 00:11:22.425 fused_ordering(304) 00:11:22.425 fused_ordering(305) 00:11:22.425 fused_ordering(306) 00:11:22.425 fused_ordering(307) 00:11:22.425 fused_ordering(308) 00:11:22.425 fused_ordering(309) 00:11:22.425 fused_ordering(310) 00:11:22.425 fused_ordering(311) 00:11:22.425 fused_ordering(312) 00:11:22.425 fused_ordering(313) 00:11:22.425 fused_ordering(314) 00:11:22.425 fused_ordering(315) 00:11:22.425 fused_ordering(316) 00:11:22.425 fused_ordering(317) 00:11:22.425 fused_ordering(318) 00:11:22.425 fused_ordering(319) 00:11:22.425 fused_ordering(320) 00:11:22.425 fused_ordering(321) 00:11:22.425 fused_ordering(322) 00:11:22.425 fused_ordering(323) 00:11:22.425 fused_ordering(324) 00:11:22.425 fused_ordering(325) 00:11:22.425 fused_ordering(326) 00:11:22.425 fused_ordering(327) 00:11:22.425 fused_ordering(328) 00:11:22.425 fused_ordering(329) 00:11:22.425 fused_ordering(330) 00:11:22.425 fused_ordering(331) 00:11:22.425 fused_ordering(332) 00:11:22.425 fused_ordering(333) 00:11:22.425 fused_ordering(334) 00:11:22.425 fused_ordering(335) 00:11:22.425 fused_ordering(336) 00:11:22.425 fused_ordering(337) 00:11:22.425 fused_ordering(338) 00:11:22.425 fused_ordering(339) 00:11:22.425 fused_ordering(340) 00:11:22.425 fused_ordering(341) 00:11:22.425 fused_ordering(342) 00:11:22.425 fused_ordering(343) 00:11:22.425 fused_ordering(344) 00:11:22.425 fused_ordering(345) 00:11:22.425 fused_ordering(346) 00:11:22.425 fused_ordering(347) 00:11:22.425 fused_ordering(348) 00:11:22.425 fused_ordering(349) 00:11:22.425 fused_ordering(350) 00:11:22.425 fused_ordering(351) 00:11:22.425 fused_ordering(352) 00:11:22.425 fused_ordering(353) 00:11:22.425 fused_ordering(354) 00:11:22.425 fused_ordering(355) 00:11:22.425 fused_ordering(356) 00:11:22.425 fused_ordering(357) 00:11:22.425 fused_ordering(358) 00:11:22.425 fused_ordering(359) 00:11:22.425 fused_ordering(360) 00:11:22.425 fused_ordering(361) 00:11:22.425 fused_ordering(362) 00:11:22.425 fused_ordering(363) 00:11:22.425 fused_ordering(364) 00:11:22.425 fused_ordering(365) 00:11:22.425 fused_ordering(366) 00:11:22.425 fused_ordering(367) 00:11:22.425 fused_ordering(368) 00:11:22.425 fused_ordering(369) 00:11:22.425 fused_ordering(370) 00:11:22.425 fused_ordering(371) 00:11:22.425 fused_ordering(372) 00:11:22.425 fused_ordering(373) 00:11:22.425 fused_ordering(374) 00:11:22.425 fused_ordering(375) 00:11:22.425 fused_ordering(376) 00:11:22.425 fused_ordering(377) 00:11:22.425 fused_ordering(378) 00:11:22.425 fused_ordering(379) 00:11:22.425 fused_ordering(380) 00:11:22.425 fused_ordering(381) 00:11:22.425 fused_ordering(382) 00:11:22.425 fused_ordering(383) 00:11:22.425 fused_ordering(384) 00:11:22.425 fused_ordering(385) 00:11:22.425 fused_ordering(386) 00:11:22.425 fused_ordering(387) 00:11:22.425 fused_ordering(388) 00:11:22.425 fused_ordering(389) 00:11:22.425 fused_ordering(390) 00:11:22.425 fused_ordering(391) 00:11:22.425 fused_ordering(392) 00:11:22.425 fused_ordering(393) 00:11:22.425 fused_ordering(394) 00:11:22.425 fused_ordering(395) 00:11:22.425 fused_ordering(396) 00:11:22.425 fused_ordering(397) 00:11:22.425 fused_ordering(398) 00:11:22.425 fused_ordering(399) 00:11:22.425 fused_ordering(400) 00:11:22.425 fused_ordering(401) 00:11:22.425 
fused_ordering(402) 00:11:22.425 fused_ordering(403) 00:11:22.425 fused_ordering(404) 00:11:22.425 fused_ordering(405) 00:11:22.425 fused_ordering(406) 00:11:22.425 fused_ordering(407) 00:11:22.425 fused_ordering(408) 00:11:22.425 fused_ordering(409) 00:11:22.425 fused_ordering(410) 00:11:22.997 fused_ordering(411) 00:11:22.997 fused_ordering(412) 00:11:22.997 fused_ordering(413) 00:11:22.997 fused_ordering(414) 00:11:22.997 fused_ordering(415) 00:11:22.997 fused_ordering(416) 00:11:22.997 fused_ordering(417) 00:11:22.997 fused_ordering(418) 00:11:22.997 fused_ordering(419) 00:11:22.997 fused_ordering(420) 00:11:22.997 fused_ordering(421) 00:11:22.997 fused_ordering(422) 00:11:22.997 fused_ordering(423) 00:11:22.997 fused_ordering(424) 00:11:22.997 fused_ordering(425) 00:11:22.997 fused_ordering(426) 00:11:22.997 fused_ordering(427) 00:11:22.997 fused_ordering(428) 00:11:22.997 fused_ordering(429) 00:11:22.997 fused_ordering(430) 00:11:22.997 fused_ordering(431) 00:11:22.997 fused_ordering(432) 00:11:22.997 fused_ordering(433) 00:11:22.997 fused_ordering(434) 00:11:22.997 fused_ordering(435) 00:11:22.997 fused_ordering(436) 00:11:22.997 fused_ordering(437) 00:11:22.997 fused_ordering(438) 00:11:22.997 fused_ordering(439) 00:11:22.997 fused_ordering(440) 00:11:22.997 fused_ordering(441) 00:11:22.997 fused_ordering(442) 00:11:22.997 fused_ordering(443) 00:11:22.997 fused_ordering(444) 00:11:22.997 fused_ordering(445) 00:11:22.997 fused_ordering(446) 00:11:22.997 fused_ordering(447) 00:11:22.997 fused_ordering(448) 00:11:22.997 fused_ordering(449) 00:11:22.997 fused_ordering(450) 00:11:22.997 fused_ordering(451) 00:11:22.997 fused_ordering(452) 00:11:22.997 fused_ordering(453) 00:11:22.997 fused_ordering(454) 00:11:22.997 fused_ordering(455) 00:11:22.997 fused_ordering(456) 00:11:22.997 fused_ordering(457) 00:11:22.997 fused_ordering(458) 00:11:22.997 fused_ordering(459) 00:11:22.997 fused_ordering(460) 00:11:22.997 fused_ordering(461) 00:11:22.997 fused_ordering(462) 00:11:22.997 fused_ordering(463) 00:11:22.997 fused_ordering(464) 00:11:22.997 fused_ordering(465) 00:11:22.997 fused_ordering(466) 00:11:22.997 fused_ordering(467) 00:11:22.997 fused_ordering(468) 00:11:22.997 fused_ordering(469) 00:11:22.997 fused_ordering(470) 00:11:22.997 fused_ordering(471) 00:11:22.997 fused_ordering(472) 00:11:22.997 fused_ordering(473) 00:11:22.997 fused_ordering(474) 00:11:22.997 fused_ordering(475) 00:11:22.997 fused_ordering(476) 00:11:22.997 fused_ordering(477) 00:11:22.997 fused_ordering(478) 00:11:22.997 fused_ordering(479) 00:11:22.997 fused_ordering(480) 00:11:22.997 fused_ordering(481) 00:11:22.997 fused_ordering(482) 00:11:22.997 fused_ordering(483) 00:11:22.997 fused_ordering(484) 00:11:22.997 fused_ordering(485) 00:11:22.997 fused_ordering(486) 00:11:22.997 fused_ordering(487) 00:11:22.997 fused_ordering(488) 00:11:22.997 fused_ordering(489) 00:11:22.997 fused_ordering(490) 00:11:22.997 fused_ordering(491) 00:11:22.997 fused_ordering(492) 00:11:22.997 fused_ordering(493) 00:11:22.997 fused_ordering(494) 00:11:22.997 fused_ordering(495) 00:11:22.997 fused_ordering(496) 00:11:22.997 fused_ordering(497) 00:11:22.997 fused_ordering(498) 00:11:22.997 fused_ordering(499) 00:11:22.997 fused_ordering(500) 00:11:22.997 fused_ordering(501) 00:11:22.997 fused_ordering(502) 00:11:22.997 fused_ordering(503) 00:11:22.997 fused_ordering(504) 00:11:22.997 fused_ordering(505) 00:11:22.997 fused_ordering(506) 00:11:22.997 fused_ordering(507) 00:11:22.997 fused_ordering(508) 00:11:22.997 fused_ordering(509) 
00:11:22.997 fused_ordering(510) 00:11:22.997 fused_ordering(511) 00:11:22.997 fused_ordering(512) 00:11:22.997 fused_ordering(513) 00:11:22.997 fused_ordering(514) 00:11:22.997 fused_ordering(515) 00:11:22.997 fused_ordering(516) 00:11:22.997 fused_ordering(517) 00:11:22.998 fused_ordering(518) 00:11:22.998 fused_ordering(519) 00:11:22.998 fused_ordering(520) 00:11:22.998 fused_ordering(521) 00:11:22.998 fused_ordering(522) 00:11:22.998 fused_ordering(523) 00:11:22.998 fused_ordering(524) 00:11:22.998 fused_ordering(525) 00:11:22.998 fused_ordering(526) 00:11:22.998 fused_ordering(527) 00:11:22.998 fused_ordering(528) 00:11:22.998 fused_ordering(529) 00:11:22.998 fused_ordering(530) 00:11:22.998 fused_ordering(531) 00:11:22.998 fused_ordering(532) 00:11:22.998 fused_ordering(533) 00:11:22.998 fused_ordering(534) 00:11:22.998 fused_ordering(535) 00:11:22.998 fused_ordering(536) 00:11:22.998 fused_ordering(537) 00:11:22.998 fused_ordering(538) 00:11:22.998 fused_ordering(539) 00:11:22.998 fused_ordering(540) 00:11:22.998 fused_ordering(541) 00:11:22.998 fused_ordering(542) 00:11:22.998 fused_ordering(543) 00:11:22.998 fused_ordering(544) 00:11:22.998 fused_ordering(545) 00:11:22.998 fused_ordering(546) 00:11:22.998 fused_ordering(547) 00:11:22.998 fused_ordering(548) 00:11:22.998 fused_ordering(549) 00:11:22.998 fused_ordering(550) 00:11:22.998 fused_ordering(551) 00:11:22.998 fused_ordering(552) 00:11:22.998 fused_ordering(553) 00:11:22.998 fused_ordering(554) 00:11:22.998 fused_ordering(555) 00:11:22.998 fused_ordering(556) 00:11:22.998 fused_ordering(557) 00:11:22.998 fused_ordering(558) 00:11:22.998 fused_ordering(559) 00:11:22.998 fused_ordering(560) 00:11:22.998 fused_ordering(561) 00:11:22.998 fused_ordering(562) 00:11:22.998 fused_ordering(563) 00:11:22.998 fused_ordering(564) 00:11:22.998 fused_ordering(565) 00:11:22.998 fused_ordering(566) 00:11:22.998 fused_ordering(567) 00:11:22.998 fused_ordering(568) 00:11:22.998 fused_ordering(569) 00:11:22.998 fused_ordering(570) 00:11:22.998 fused_ordering(571) 00:11:22.998 fused_ordering(572) 00:11:22.998 fused_ordering(573) 00:11:22.998 fused_ordering(574) 00:11:22.998 fused_ordering(575) 00:11:22.998 fused_ordering(576) 00:11:22.998 fused_ordering(577) 00:11:22.998 fused_ordering(578) 00:11:22.998 fused_ordering(579) 00:11:22.998 fused_ordering(580) 00:11:22.998 fused_ordering(581) 00:11:22.998 fused_ordering(582) 00:11:22.998 fused_ordering(583) 00:11:22.998 fused_ordering(584) 00:11:22.998 fused_ordering(585) 00:11:22.998 fused_ordering(586) 00:11:22.998 fused_ordering(587) 00:11:22.998 fused_ordering(588) 00:11:22.998 fused_ordering(589) 00:11:22.998 fused_ordering(590) 00:11:22.998 fused_ordering(591) 00:11:22.998 fused_ordering(592) 00:11:22.998 fused_ordering(593) 00:11:22.998 fused_ordering(594) 00:11:22.998 fused_ordering(595) 00:11:22.998 fused_ordering(596) 00:11:22.998 fused_ordering(597) 00:11:22.998 fused_ordering(598) 00:11:22.998 fused_ordering(599) 00:11:22.998 fused_ordering(600) 00:11:22.998 fused_ordering(601) 00:11:22.998 fused_ordering(602) 00:11:22.998 fused_ordering(603) 00:11:22.998 fused_ordering(604) 00:11:22.998 fused_ordering(605) 00:11:22.998 fused_ordering(606) 00:11:22.998 fused_ordering(607) 00:11:22.998 fused_ordering(608) 00:11:22.998 fused_ordering(609) 00:11:22.998 fused_ordering(610) 00:11:22.998 fused_ordering(611) 00:11:22.998 fused_ordering(612) 00:11:22.998 fused_ordering(613) 00:11:22.998 fused_ordering(614) 00:11:22.998 fused_ordering(615) 00:11:23.569 fused_ordering(616) 00:11:23.569 
fused_ordering(617) 00:11:23.569 fused_ordering(618) 00:11:23.569 fused_ordering(619) 00:11:23.569 fused_ordering(620) 00:11:23.569 fused_ordering(621) 00:11:23.569 fused_ordering(622) 00:11:23.569 fused_ordering(623) 00:11:23.569 fused_ordering(624) 00:11:23.569 fused_ordering(625) 00:11:23.569 fused_ordering(626) 00:11:23.569 fused_ordering(627) 00:11:23.569 fused_ordering(628) 00:11:23.569 fused_ordering(629) 00:11:23.569 fused_ordering(630) 00:11:23.569 fused_ordering(631) 00:11:23.569 fused_ordering(632) 00:11:23.569 fused_ordering(633) 00:11:23.569 fused_ordering(634) 00:11:23.569 fused_ordering(635) 00:11:23.569 fused_ordering(636) 00:11:23.569 fused_ordering(637) 00:11:23.569 fused_ordering(638) 00:11:23.569 fused_ordering(639) 00:11:23.569 fused_ordering(640) 00:11:23.569 fused_ordering(641) 00:11:23.569 fused_ordering(642) 00:11:23.569 fused_ordering(643) 00:11:23.569 fused_ordering(644) 00:11:23.569 fused_ordering(645) 00:11:23.569 fused_ordering(646) 00:11:23.569 fused_ordering(647) 00:11:23.569 fused_ordering(648) 00:11:23.569 fused_ordering(649) 00:11:23.569 fused_ordering(650) 00:11:23.569 fused_ordering(651) 00:11:23.569 fused_ordering(652) 00:11:23.569 fused_ordering(653) 00:11:23.569 fused_ordering(654) 00:11:23.569 fused_ordering(655) 00:11:23.569 fused_ordering(656) 00:11:23.569 fused_ordering(657) 00:11:23.569 fused_ordering(658) 00:11:23.569 fused_ordering(659) 00:11:23.569 fused_ordering(660) 00:11:23.569 fused_ordering(661) 00:11:23.569 fused_ordering(662) 00:11:23.569 fused_ordering(663) 00:11:23.569 fused_ordering(664) 00:11:23.569 fused_ordering(665) 00:11:23.569 fused_ordering(666) 00:11:23.569 fused_ordering(667) 00:11:23.569 fused_ordering(668) 00:11:23.569 fused_ordering(669) 00:11:23.569 fused_ordering(670) 00:11:23.569 fused_ordering(671) 00:11:23.569 fused_ordering(672) 00:11:23.569 fused_ordering(673) 00:11:23.569 fused_ordering(674) 00:11:23.569 fused_ordering(675) 00:11:23.569 fused_ordering(676) 00:11:23.569 fused_ordering(677) 00:11:23.569 fused_ordering(678) 00:11:23.569 fused_ordering(679) 00:11:23.569 fused_ordering(680) 00:11:23.569 fused_ordering(681) 00:11:23.569 fused_ordering(682) 00:11:23.569 fused_ordering(683) 00:11:23.569 fused_ordering(684) 00:11:23.569 fused_ordering(685) 00:11:23.569 fused_ordering(686) 00:11:23.569 fused_ordering(687) 00:11:23.569 fused_ordering(688) 00:11:23.569 fused_ordering(689) 00:11:23.569 fused_ordering(690) 00:11:23.569 fused_ordering(691) 00:11:23.569 fused_ordering(692) 00:11:23.569 fused_ordering(693) 00:11:23.569 fused_ordering(694) 00:11:23.569 fused_ordering(695) 00:11:23.569 fused_ordering(696) 00:11:23.569 fused_ordering(697) 00:11:23.569 fused_ordering(698) 00:11:23.569 fused_ordering(699) 00:11:23.569 fused_ordering(700) 00:11:23.569 fused_ordering(701) 00:11:23.569 fused_ordering(702) 00:11:23.569 fused_ordering(703) 00:11:23.569 fused_ordering(704) 00:11:23.569 fused_ordering(705) 00:11:23.569 fused_ordering(706) 00:11:23.569 fused_ordering(707) 00:11:23.569 fused_ordering(708) 00:11:23.569 fused_ordering(709) 00:11:23.569 fused_ordering(710) 00:11:23.569 fused_ordering(711) 00:11:23.569 fused_ordering(712) 00:11:23.569 fused_ordering(713) 00:11:23.569 fused_ordering(714) 00:11:23.569 fused_ordering(715) 00:11:23.569 fused_ordering(716) 00:11:23.569 fused_ordering(717) 00:11:23.569 fused_ordering(718) 00:11:23.569 fused_ordering(719) 00:11:23.569 fused_ordering(720) 00:11:23.569 fused_ordering(721) 00:11:23.569 fused_ordering(722) 00:11:23.569 fused_ordering(723) 00:11:23.569 fused_ordering(724) 
00:11:23.569 fused_ordering(725) [fused_ordering(726) through fused_ordering(1022): identical per-request trace lines, timestamps 00:11:23.569 to 00:11:24.151, condensed] 00:11:24.151 fused_ordering(1023) 00:11:24.151 16:02:59 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:24.151 16:02:59 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:24.151 16:02:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:24.151 16:02:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:11:24.151 16:02:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:24.151 16:02:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:11:24.151 16:02:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:24.151 16:02:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r
nvme-tcp 00:11:24.151 rmmod nvme_tcp 00:11:24.151 rmmod nvme_fabrics 00:11:24.151 rmmod nvme_keyring 00:11:24.151 16:02:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:24.151 16:02:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:11:24.151 16:02:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:11:24.151 16:02:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2172027 ']' 00:11:24.151 16:02:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2172027 00:11:24.151 16:02:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 2172027 ']' 00:11:24.151 16:02:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 2172027 00:11:24.151 16:02:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:11:24.151 16:02:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:24.151 16:02:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2172027 00:11:24.412 16:03:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:24.412 16:03:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:24.412 16:03:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2172027' 00:11:24.412 killing process with pid 2172027 00:11:24.412 16:03:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 2172027 00:11:24.412 16:03:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 2172027 00:11:24.412 16:03:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:24.412 16:03:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:24.412 16:03:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:24.412 16:03:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:24.412 16:03:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:24.412 16:03:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:24.412 16:03:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:24.412 16:03:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.959 16:03:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:26.959 00:11:26.959 real 0m13.215s 00:11:26.959 user 0m7.218s 00:11:26.959 sys 0m7.059s 00:11:26.959 16:03:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:26.959 16:03:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:26.959 ************************************ 00:11:26.959 END TEST nvmf_fused_ordering 00:11:26.959 ************************************ 00:11:26.959 16:03:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:26.960 16:03:02 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:26.960 16:03:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:26.960 16:03:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
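The nvmftestfini trace above is a fixed teardown sequence: sync, unload the NVMe-oF host modules, kill the nvmf_tgt process if it is not a bare sudo wrapper, then remove the test namespace and flush the initiator interface. A minimal sketch of that sequence, assuming the pid and the cvl_0_* names from this run; the cleanup_target helper and the ip netns delete stand-in for _remove_spdk_ns are hypothetical:

  cleanup_target() {
      local pid=$1                         # nvmf_tgt pid, 2172027 in the run above
      sync                                 # nvmf/common.sh@117
      modprobe -v -r nvme-tcp || true      # also drops nvme_fabrics and nvme_keyring, as logged
      modprobe -v -r nvme-fabrics || true
      if [ "$(ps --no-headers -o comm= "$pid")" != "sudo" ]; then
          kill "$pid"                      # killprocess refuses to kill a plain sudo process
      fi
      ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # rough stand-in for _remove_spdk_ns
      ip -4 addr flush cvl_0_1             # nvmf/common.sh@279
  }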
00:11:26.960 16:03:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:26.960 ************************************ 00:11:26.960 START TEST nvmf_delete_subsystem 00:11:26.960 ************************************ 00:11:26.960 16:03:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:26.960 * Looking for test storage... 00:11:26.960 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:26.960 16:03:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:26.960 16:03:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:11:26.960 16:03:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:26.960 16:03:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:26.960 16:03:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:26.960 16:03:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:26.960 16:03:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:26.960 16:03:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:26.960 16:03:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:26.960 16:03:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:26.960 16:03:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:26.960 16:03:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:26.960 16:03:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:26.960 16:03:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:26.960 16:03:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:26.960 16:03:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:26.960 16:03:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:26.960 16:03:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:26.960 16:03:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:26.960 16:03:02 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:26.960 16:03:02 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:26.960 16:03:02 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:26.960 16:03:02 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.960 16:03:02 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.960 16:03:02 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.960 16:03:02 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:11:26.960 16:03:02 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.960 16:03:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:11:26.960 16:03:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:26.960 16:03:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:26.960 16:03:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:26.960 16:03:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:26.960 16:03:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:26.960 16:03:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:26.960 16:03:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:26.960 16:03:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:26.960 16:03:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:11:26.960 16:03:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:26.960 16:03:02 
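The very long PATH echoed above is expected: paths/export.sh prepends the golangci, protoc and Go directories each time it is sourced, so repeated sourcing stacks the same prefixes. A small sketch of an idempotent prepend that would avoid the duplication; prepend_path is a hypothetical helper, not part of the SPDK scripts:

  prepend_path() {
      case ":$PATH:" in
          *":$1:"*) ;;                # prefix already present, leave PATH unchanged
          *) PATH="$1:$PATH" ;;
      esac
  }
  prepend_path /opt/golangci/1.54.2/bin
  prepend_path /opt/protoc/21.7/bin
  prepend_path /opt/go/1.21.1/bin
  export PATH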
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:26.960 16:03:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:26.960 16:03:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:26.960 16:03:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:26.960 16:03:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.960 16:03:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:26.960 16:03:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.960 16:03:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:26.960 16:03:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:26.960 16:03:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:26.960 16:03:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:33.560 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:33.560 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:33.560 
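The block above is gather_supported_nvmf_pci_devs filling the e810, x722 and mlx device-ID tables and then walking pci_devs; with SPDK_TEST_NVMF_NICS=e810 only the Intel IDs 0x1592 and 0x159b are relevant. A rough lspci-based equivalent of that matching, which would print the same 'Found 0000:4b:00.x (0x8086 - 0x159b)' lines seen in the trace; the real script reads its internal pci_bus_cache rather than calling lspci:

  intel=8086                      # vendor ID used for the e810 table above
  for dev in 1592 159b; do        # the two Intel E810 device IDs listed in the trace
      lspci -D -d "${intel}:${dev}" | while read -r addr _; do
          echo "Found ${addr} (0x${intel} - 0x${dev})"
      done
  done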
16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:33.560 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:33.560 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:33.560 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:33.822 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:33.822 16:03:09 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:33.822 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:33.822 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:33.822 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:33.822 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:33.822 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:33.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:33.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.566 ms 00:11:33.822 00:11:33.822 --- 10.0.0.2 ping statistics --- 00:11:33.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.822 rtt min/avg/max/mdev = 0.566/0.566/0.566/0.000 ms 00:11:33.822 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:33.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:33.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.348 ms 00:11:33.822 00:11:33.822 --- 10.0.0.1 ping statistics --- 00:11:33.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.822 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:11:33.822 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:33.822 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:11:33.822 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:33.822 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:33.822 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:33.822 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:33.822 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:33.822 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:33.822 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:33.822 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:11:33.822 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:33.822 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:33.822 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:33.822 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2177370 00:11:33.822 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2177370 00:11:33.822 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:11:33.822 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 2177370 ']' 00:11:33.822 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.822 16:03:09 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:33.822 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:33.822 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:33.822 16:03:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:34.082 [2024-07-15 16:03:09.685078] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:11:34.082 [2024-07-15 16:03:09.685169] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:34.082 EAL: No free 2048 kB hugepages reported on node 1 00:11:34.083 [2024-07-15 16:03:09.759181] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:34.083 [2024-07-15 16:03:09.832961] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:34.083 [2024-07-15 16:03:09.833002] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:34.083 [2024-07-15 16:03:09.833010] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:34.083 [2024-07-15 16:03:09.833017] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:34.083 [2024-07-15 16:03:09.833022] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
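At this point the target application is up: nvmfappstart launched nvmf_tgt inside the cvl_0_0_ns_spdk namespace with core mask 0x3, and waitforlisten polls the RPC socket until the reactors answer. A condensed sketch of those two steps, assuming the workspace paths from this run and a 30-second poll budget; the real waitforlisten has its own retry counter and error handling:

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!                      # 2177370 in the trace above

  # Poll /var/tmp/spdk.sock until the target answers an RPC.
  for _ in $(seq 1 300); do
      if /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
          rpc_get_methods >/dev/null 2>&1; then
          break
      fi
      sleep 0.1
  done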
00:11:34.083 [2024-07-15 16:03:09.833170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:34.083 [2024-07-15 16:03:09.833199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.653 16:03:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:34.653 16:03:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:11:34.653 16:03:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:34.653 16:03:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:34.653 16:03:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:34.653 16:03:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:34.653 16:03:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:34.653 16:03:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.653 16:03:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:34.914 [2024-07-15 16:03:10.500819] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:34.914 16:03:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.914 16:03:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:34.914 16:03:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.914 16:03:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:34.914 16:03:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.914 16:03:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:34.914 16:03:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.914 16:03:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:34.914 [2024-07-15 16:03:10.524994] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:34.914 16:03:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.914 16:03:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:34.914 16:03:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.914 16:03:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:34.914 NULL1 00:11:34.914 16:03:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.914 16:03:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:34.914 16:03:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.914 16:03:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:34.914 Delay0 00:11:34.914 16:03:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.914 16:03:10 
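The rpc_cmd calls traced around this point build the whole test target: a TCP transport, subsystem cnode1 listening on 10.0.0.2:4420, a null bdev wrapped in a delay bdev, and that delay bdev exposed as namespace 1. The same sequence written out against rpc.py, assuming rpc.py talks to the target's default /var/tmp/spdk.sock:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512        # 1000 MB backing bdev with 512-byte blocks
  $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0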
nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:34.914 16:03:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.914 16:03:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:34.914 16:03:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.914 16:03:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2177720 00:11:34.914 16:03:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:34.914 16:03:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:34.914 EAL: No free 2048 kB hugepages reported on node 1 00:11:34.914 [2024-07-15 16:03:10.631674] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:11:36.826 16:03:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:36.826 16:03:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.826 16:03:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:37.087 Read completed with error (sct=0, sc=8) 00:11:37.087 Read completed with error (sct=0, sc=8) 00:11:37.087 Read completed with error (sct=0, sc=8) 00:11:37.087 starting I/O failed: -6 00:11:37.087 Read completed with error (sct=0, sc=8) 00:11:37.087 Read completed with error (sct=0, sc=8) 00:11:37.087 Write completed with error (sct=0, sc=8) 00:11:37.087 Write completed with error (sct=0, sc=8) 00:11:37.087 starting I/O failed: -6 00:11:37.087 Read completed with error (sct=0, sc=8) 00:11:37.087 Read completed with error (sct=0, sc=8) 00:11:37.087 Read completed with error (sct=0, sc=8) 00:11:37.087 Read completed with error (sct=0, sc=8) 00:11:37.087 starting I/O failed: -6 00:11:37.087 Read completed with error (sct=0, sc=8) 00:11:37.087 Read completed with error (sct=0, sc=8) 00:11:37.087 Read completed with error (sct=0, sc=8) 00:11:37.087 Read completed with error (sct=0, sc=8) 00:11:37.087 starting I/O failed: -6 00:11:37.087 Read completed with error (sct=0, sc=8) 00:11:37.087 Write completed with error (sct=0, sc=8) 00:11:37.087 Read completed with error (sct=0, sc=8) 00:11:37.087 Write completed with error (sct=0, sc=8) 00:11:37.087 starting I/O failed: -6 00:11:37.087 Read completed with error (sct=0, sc=8) 00:11:37.087 Read completed with error (sct=0, sc=8) 00:11:37.087 Write completed with error (sct=0, sc=8) 00:11:37.087 Write completed with error (sct=0, sc=8) 00:11:37.087 starting I/O failed: -6 00:11:37.087 Read completed with error (sct=0, sc=8) 00:11:37.087 Read completed with error (sct=0, sc=8) 00:11:37.087 Read completed with error (sct=0, sc=8) 00:11:37.087 Read completed with error (sct=0, sc=8) 00:11:37.087 starting I/O failed: -6 00:11:37.087 Write completed with error (sct=0, sc=8) 00:11:37.087 Read completed with error (sct=0, sc=8) 00:11:37.087 Read completed with error (sct=0, sc=8) 00:11:37.087 Write completed with error (sct=0, sc=8) 
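The perf/delete interleaving above is the core of the test: spdk_nvme_perf is started against cnode1, given two seconds to queue I/O against the slow Delay0 namespace, and then the subsystem is deleted underneath it, which is why the outstanding requests below all complete with errors. The same steps in isolation, using the exact perf arguments from the log:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!                     # 2177720 in this run
  sleep 2
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  # perf drains with aborted completions and exits; the script then confirms the
  # pid is gone with kill -0, as seen further down.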
00:11:37.087 starting I/O failed: -6 [repeated 'Read/Write completed with error (sct=0, sc=8)' records interleaved with 'starting I/O failed: -6', condensed] 00:11:37.087 [2024-07-15 16:03:12.716787] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ba5c0 is same with the state(5) to be set [repeated completion-error records condensed] 00:11:37.087 [2024-07-15 16:03:12.719978] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7facf4000c00 is same with the state(5) to be set [repeated completion-error records condensed] 00:11:38.030 [2024-07-15 16:03:13.690395] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bbac0 is same with the state(5) to be set [repeated completion-error records condensed] 00:11:38.030 [2024-07-15 16:03:13.720751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ba3e0 is same with the state(5) to be set [repeated completion-error records condensed] 00:11:38.030 [2024-07-15 16:03:13.720914] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ba7a0 is same with the state(5) to be set [repeated completion-error records condensed] 00:11:38.030 [2024-07-15 16:03:13.722226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7facf400d740 is same with the state(5) to be set [repeated completion-error records condensed] 00:11:38.030 Read completed
with error (sct=0, sc=8) 00:11:38.030 Write completed with error (sct=0, sc=8) 00:11:38.030 Read completed with error (sct=0, sc=8) 00:11:38.030 Write completed with error (sct=0, sc=8) 00:11:38.030 Read completed with error (sct=0, sc=8) 00:11:38.030 Write completed with error (sct=0, sc=8) 00:11:38.030 Read completed with error (sct=0, sc=8) 00:11:38.030 Read completed with error (sct=0, sc=8) 00:11:38.030 Write completed with error (sct=0, sc=8) 00:11:38.030 Read completed with error (sct=0, sc=8) 00:11:38.030 Read completed with error (sct=0, sc=8) 00:11:38.030 Read completed with error (sct=0, sc=8) 00:11:38.031 Read completed with error (sct=0, sc=8) 00:11:38.031 Read completed with error (sct=0, sc=8) 00:11:38.031 Read completed with error (sct=0, sc=8) 00:11:38.031 Read completed with error (sct=0, sc=8) 00:11:38.031 Read completed with error (sct=0, sc=8) 00:11:38.031 Read completed with error (sct=0, sc=8) 00:11:38.031 [2024-07-15 16:03:13.722307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7facf400cfe0 is same with the state(5) to be set 00:11:38.031 Initializing NVMe Controllers 00:11:38.031 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:38.031 Controller IO queue size 128, less than required. 00:11:38.031 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:38.031 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:38.031 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:38.031 Initialization complete. Launching workers. 00:11:38.031 ======================================================== 00:11:38.031 Latency(us) 00:11:38.031 Device Information : IOPS MiB/s Average min max 00:11:38.031 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 170.28 0.08 892994.31 209.74 1044045.00 00:11:38.031 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 152.86 0.07 1004464.49 289.78 2002267.45 00:11:38.031 ======================================================== 00:11:38.031 Total : 323.14 0.16 945723.65 209.74 2002267.45 00:11:38.031 00:11:38.031 [2024-07-15 16:03:13.722783] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7bbac0 (9): Bad file descriptor 00:11:38.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:11:38.031 16:03:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.031 16:03:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:11:38.031 16:03:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2177720 00:11:38.031 16:03:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:11:38.602 16:03:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:38.602 16:03:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2177720 00:11:38.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2177720) - No such process 00:11:38.602 16:03:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2177720 00:11:38.602 16:03:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:11:38.602 16:03:14 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@650 -- # valid_exec_arg wait 2177720 00:11:38.602 16:03:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:11:38.602 16:03:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:38.602 16:03:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:11:38.602 16:03:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:38.602 16:03:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 2177720 00:11:38.602 16:03:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:11:38.602 16:03:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:38.602 16:03:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:38.602 16:03:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:38.602 16:03:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:38.602 16:03:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.602 16:03:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:38.602 16:03:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.602 16:03:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:38.602 16:03:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.602 16:03:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:38.602 [2024-07-15 16:03:14.252847] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:38.602 16:03:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.602 16:03:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:38.602 16:03:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.602 16:03:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:38.602 16:03:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.602 16:03:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2178627 00:11:38.602 16:03:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:11:38.602 16:03:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2178627 00:11:38.602 16:03:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:38.602 16:03:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:38.602 EAL: No free 2048 kB hugepages reported on node 1 00:11:38.602 [2024-07-15 16:03:14.323269] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not 
added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:11:39.173 16:03:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:39.173 16:03:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2178627 00:11:39.173 16:03:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:39.744 16:03:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:39.744 16:03:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2178627 00:11:39.744 16:03:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:40.008 16:03:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:40.008 16:03:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2178627 00:11:40.008 16:03:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:40.633 16:03:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:40.633 16:03:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2178627 00:11:40.633 16:03:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:41.205 16:03:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:41.205 16:03:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2178627 00:11:41.205 16:03:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:41.466 16:03:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:41.466 16:03:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2178627 00:11:41.466 16:03:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:41.727 Initializing NVMe Controllers 00:11:41.727 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:41.727 Controller IO queue size 128, less than required. 00:11:41.727 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:41.727 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:41.727 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:41.727 Initialization complete. Launching workers. 
00:11:41.727 ======================================================== 00:11:41.727 Latency(us) 00:11:41.727 Device Information : IOPS MiB/s Average min max 00:11:41.727 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002388.48 1000241.01 1008131.17 00:11:41.727 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002931.97 1000189.32 1009979.45 00:11:41.727 ======================================================== 00:11:41.727 Total : 256.00 0.12 1002660.22 1000189.32 1009979.45 00:11:41.727 00:11:41.988 16:03:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:41.988 16:03:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2178627 00:11:41.988 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2178627) - No such process 00:11:41.988 16:03:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2178627 00:11:41.988 16:03:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:41.988 16:03:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:11:41.988 16:03:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:41.988 16:03:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:11:41.989 16:03:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:41.989 16:03:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:11:41.989 16:03:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:41.989 16:03:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:41.989 rmmod nvme_tcp 00:11:41.989 rmmod nvme_fabrics 00:11:42.250 rmmod nvme_keyring 00:11:42.250 16:03:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:42.250 16:03:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:11:42.250 16:03:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:11:42.250 16:03:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2177370 ']' 00:11:42.250 16:03:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2177370 00:11:42.250 16:03:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 2177370 ']' 00:11:42.250 16:03:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 2177370 00:11:42.250 16:03:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:11:42.250 16:03:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:42.250 16:03:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2177370 00:11:42.250 16:03:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:42.250 16:03:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:42.250 16:03:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2177370' 00:11:42.250 killing process with pid 2177370 00:11:42.250 16:03:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 2177370 00:11:42.250 16:03:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 
2177370 00:11:42.250 16:03:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:42.250 16:03:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:42.250 16:03:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:42.250 16:03:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:42.250 16:03:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:42.250 16:03:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.250 16:03:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:42.250 16:03:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:44.800 16:03:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:44.800 00:11:44.800 real 0m17.870s 00:11:44.800 user 0m30.483s 00:11:44.800 sys 0m6.287s 00:11:44.800 16:03:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:44.800 16:03:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:44.800 ************************************ 00:11:44.800 END TEST nvmf_delete_subsystem 00:11:44.800 ************************************ 00:11:44.800 16:03:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:44.800 16:03:20 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:44.800 16:03:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:44.800 16:03:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:44.800 16:03:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:44.800 ************************************ 00:11:44.800 START TEST nvmf_ns_masking 00:11:44.800 ************************************ 00:11:44.800 16:03:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:44.800 * Looking for test storage... 
00:11:44.800 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:44.800 16:03:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:44.800 16:03:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:11:44.800 16:03:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:44.800 16:03:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:44.800 16:03:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:44.800 16:03:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:44.800 16:03:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:44.800 16:03:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:44.800 16:03:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:44.800 16:03:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:44.800 16:03:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:44.800 16:03:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:44.800 16:03:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:44.800 16:03:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:44.800 16:03:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:44.801 16:03:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:44.801 16:03:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:44.801 16:03:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:44.801 16:03:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:44.801 16:03:20 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:44.801 16:03:20 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:44.801 16:03:20 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:44.801 16:03:20 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.801 16:03:20 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.801 16:03:20 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.801 16:03:20 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:44.801 16:03:20 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.801 16:03:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:11:44.801 16:03:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:44.801 16:03:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:44.801 16:03:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:44.801 16:03:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:44.801 16:03:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:44.801 16:03:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:44.801 16:03:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:44.801 16:03:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:44.801 16:03:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:44.801 16:03:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:11:44.801 16:03:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:11:44.801 16:03:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:11:44.801 16:03:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=e2dc182a-1172-4658-9342-df36dd9f3d7e 00:11:44.801 16:03:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:11:44.801 16:03:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=37621c07-45ec-4069-8074-a562550b32c6 00:11:44.801 16:03:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:44.801 16:03:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:11:44.801 16:03:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:11:44.801 16:03:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:11:44.801 16:03:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=541d8f0b-c777-4e8a-b3ff-33362a8096d0 00:11:44.801 16:03:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:11:44.801 16:03:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:44.801 16:03:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:44.801 16:03:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:44.801 16:03:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:44.801 16:03:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:44.801 16:03:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:44.801 16:03:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:44.801 16:03:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:44.801 16:03:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:44.801 16:03:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:44.801 16:03:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:11:44.801 16:03:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:51.387 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:51.387 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:51.387 
16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:51.387 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:51.387 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:11:51.387 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:51.388 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:51.388 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:51.388 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:51.388 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:51.388 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:51.388 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:51.388 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:51.388 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:51.388 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:51.388 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:51.388 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:51.388 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:51.388 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:51.388 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:51.388 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:51.648 16:03:27 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:51.648 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:51.648 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:51.648 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:51.648 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:51.648 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:51.648 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:51.648 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:51.648 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.609 ms 00:11:51.648 00:11:51.648 --- 10.0.0.2 ping statistics --- 00:11:51.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.648 rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms 00:11:51.648 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:51.648 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:51.649 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:11:51.649 00:11:51.649 --- 10.0.0.1 ping statistics --- 00:11:51.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.649 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:11:51.649 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:51.649 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:11:51.649 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:51.649 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:51.649 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:51.649 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:51.649 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:51.649 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:51.649 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:51.909 16:03:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:11:51.909 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:51.909 16:03:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:51.909 16:03:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:51.909 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2183415 00:11:51.909 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2183415 00:11:51.909 16:03:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:11:51.909 16:03:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 2183415 ']' 00:11:51.909 16:03:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.909 16:03:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:51.909 16:03:27 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.909 16:03:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:51.909 16:03:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:51.909 [2024-07-15 16:03:27.574082] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:11:51.909 [2024-07-15 16:03:27.574175] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:51.909 EAL: No free 2048 kB hugepages reported on node 1 00:11:51.909 [2024-07-15 16:03:27.645698] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.909 [2024-07-15 16:03:27.719174] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:51.909 [2024-07-15 16:03:27.719213] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:51.909 [2024-07-15 16:03:27.719221] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:51.909 [2024-07-15 16:03:27.719228] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:51.909 [2024-07-15 16:03:27.719233] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:51.909 [2024-07-15 16:03:27.719256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.850 16:03:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:52.850 16:03:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:11:52.850 16:03:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:52.850 16:03:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:52.850 16:03:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:52.850 16:03:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:52.850 16:03:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:52.850 [2024-07-15 16:03:28.510492] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:52.850 16:03:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:11:52.850 16:03:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:11:52.850 16:03:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:53.110 Malloc1 00:11:53.110 16:03:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:53.110 Malloc2 00:11:53.110 16:03:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
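The target-side configuration driven through rpc.py up to this point reduces to a handful of commands. A condensed sketch, with $rpc standing in for the full scripts/rpc.py path shown in the trace and the default RPC socket assumed:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                                       # TCP transport, options exactly as captured above
    $rpc bdev_malloc_create 64 512 -b Malloc1                                          # 64 MB malloc bdev, 512-byte blocks
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # allow any host, serial as used by the test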
00:11:53.372 16:03:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:53.633 16:03:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:53.633 [2024-07-15 16:03:29.365834] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:53.633 16:03:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:11:53.633 16:03:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 541d8f0b-c777-4e8a-b3ff-33362a8096d0 -a 10.0.0.2 -s 4420 -i 4 00:11:53.895 16:03:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:11:53.895 16:03:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:53.895 16:03:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:53.895 16:03:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:53.895 16:03:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:55.807 16:03:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:55.807 16:03:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:55.807 16:03:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:55.807 16:03:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:55.807 16:03:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:55.807 16:03:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:55.807 16:03:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:55.807 16:03:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:56.068 16:03:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:56.068 16:03:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:56.068 16:03:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:11:56.068 16:03:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:56.068 16:03:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:56.068 [ 0]:0x1 00:11:56.068 16:03:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:56.068 16:03:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:56.068 16:03:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0c42c2f2b8f640918eacaf181de73c7f 00:11:56.068 16:03:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0c42c2f2b8f640918eacaf181de73c7f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:56.068 16:03:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
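The ns_is_visible checks traced below combine nvme-cli and jq. In isolation the probe looks roughly like this sketch (nvme0 and the NSID are the values seen in this run; an all-zero NGUID is what the masked namespace reports later in the trace):

    ctrl=/dev/nvme0
    nsid=0x2
    nvme list-ns $ctrl | grep $nsid                                  # namespace listed by the controller
    nguid=$(nvme id-ns $ctrl -n $nsid -o json | jq -r .nguid)        # NGUID of that namespace
    [[ $nguid != 00000000000000000000000000000000 ]] && echo "NSID $nsid is visible (nguid $nguid)"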
00:11:56.068 16:03:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:11:56.068 16:03:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:56.068 16:03:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:56.068 [ 0]:0x1 00:11:56.329 16:03:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:56.329 16:03:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:56.329 16:03:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0c42c2f2b8f640918eacaf181de73c7f 00:11:56.329 16:03:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0c42c2f2b8f640918eacaf181de73c7f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:56.329 16:03:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:11:56.329 16:03:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:56.329 16:03:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:56.329 [ 1]:0x2 00:11:56.329 16:03:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:56.329 16:03:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:56.329 16:03:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f5b843d472014903a878000f67a5e0ae 00:11:56.329 16:03:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f5b843d472014903a878000f67a5e0ae != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:56.329 16:03:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:11:56.329 16:03:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:56.329 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.329 16:03:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:56.590 16:03:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:56.851 16:03:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:11:56.851 16:03:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 541d8f0b-c777-4e8a-b3ff-33362a8096d0 -a 10.0.0.2 -s 4420 -i 4 00:11:56.851 16:03:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:56.851 16:03:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:56.851 16:03:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:56.851 16:03:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:11:56.851 16:03:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:11:56.851 16:03:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:59.394 16:03:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:59.394 16:03:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:59.394 16:03:34 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:59.394 16:03:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:59.394 16:03:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:59.394 16:03:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:59.394 16:03:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:59.394 16:03:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:59.394 16:03:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:59.394 16:03:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:59.394 16:03:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:11:59.394 16:03:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:59.394 16:03:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:59.394 16:03:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:59.394 16:03:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:59.394 16:03:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:59.394 16:03:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:59.394 16:03:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:59.394 16:03:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:59.394 16:03:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:59.394 16:03:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:59.394 16:03:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:59.394 16:03:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:59.394 16:03:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:59.394 16:03:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:59.394 16:03:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:59.394 16:03:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:59.394 16:03:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:59.394 16:03:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:11:59.394 16:03:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:59.394 16:03:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:59.394 [ 0]:0x2 00:11:59.394 16:03:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:59.394 16:03:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:59.394 16:03:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f5b843d472014903a878000f67a5e0ae 00:11:59.394 16:03:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
f5b843d472014903a878000f67a5e0ae != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:59.394 16:03:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:59.394 16:03:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:11:59.394 16:03:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:59.394 16:03:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:59.395 [ 0]:0x1 00:11:59.395 16:03:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:59.395 16:03:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:59.395 16:03:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0c42c2f2b8f640918eacaf181de73c7f 00:11:59.395 16:03:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0c42c2f2b8f640918eacaf181de73c7f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:59.395 16:03:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:11:59.395 16:03:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:59.395 16:03:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:59.395 [ 1]:0x2 00:11:59.395 16:03:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:59.395 16:03:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:59.395 16:03:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f5b843d472014903a878000f67a5e0ae 00:11:59.395 16:03:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f5b843d472014903a878000f67a5e0ae != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:59.395 16:03:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:59.655 16:03:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:11:59.655 16:03:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:59.655 16:03:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:59.655 16:03:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:59.655 16:03:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:59.655 16:03:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:59.655 16:03:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:59.655 16:03:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:59.655 16:03:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:59.655 16:03:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:59.655 16:03:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:59.655 16:03:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:59.655 16:03:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:11:59.655 16:03:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:59.655 16:03:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:59.655 16:03:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:59.655 16:03:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:59.655 16:03:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:59.655 16:03:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:11:59.655 16:03:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:59.655 16:03:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:59.655 [ 0]:0x2 00:11:59.655 16:03:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:59.655 16:03:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:59.655 16:03:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f5b843d472014903a878000f67a5e0ae 00:11:59.655 16:03:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f5b843d472014903a878000f67a5e0ae != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:59.656 16:03:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:11:59.656 16:03:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:59.917 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.917 16:03:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:59.917 16:03:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:11:59.917 16:03:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 541d8f0b-c777-4e8a-b3ff-33362a8096d0 -a 10.0.0.2 -s 4420 -i 4 00:12:00.178 16:03:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:00.178 16:03:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:00.178 16:03:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:00.178 16:03:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:00.178 16:03:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:00.178 16:03:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:02.093 16:03:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:02.093 16:03:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:02.093 16:03:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:02.353 16:03:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:02.353 16:03:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:02.353 16:03:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
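The visibility flips exercised in this test are driven by just two RPCs against the subsystem. Stripped of the surrounding assertions, the toggle is (a sketch; $rpc as in the earlier setup sketch, NQNs and NSID as used by the test):

    # expose NSID 1 of cnode1 to host1 (the namespace was added with --no-auto-visible)
    $rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    # hide it from host1 again
    $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1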
00:12:02.353 16:03:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:02.353 16:03:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:02.353 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:02.353 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:02.353 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:02.353 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:02.353 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:02.353 [ 0]:0x1 00:12:02.353 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:02.353 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:02.353 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0c42c2f2b8f640918eacaf181de73c7f 00:12:02.353 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0c42c2f2b8f640918eacaf181de73c7f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:02.353 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:02.353 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:02.353 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:02.614 [ 1]:0x2 00:12:02.614 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:02.614 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:02.614 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f5b843d472014903a878000f67a5e0ae 00:12:02.614 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f5b843d472014903a878000f67a5e0ae != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:02.614 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:02.614 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:02.614 16:03:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:02.614 16:03:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:02.614 16:03:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:02.614 16:03:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:02.614 16:03:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:02.614 16:03:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:02.614 16:03:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:02.614 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:02.614 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:02.875 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:02.875 16:03:38 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:12:02.875 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:02.875 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:02.875 16:03:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:02.875 16:03:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:02.875 16:03:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:02.875 16:03:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:02.875 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:12:02.875 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:02.875 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:02.875 [ 0]:0x2 00:12:02.875 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:02.875 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:02.875 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f5b843d472014903a878000f67a5e0ae 00:12:02.875 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f5b843d472014903a878000f67a5e0ae != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:02.875 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:02.875 16:03:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:02.875 16:03:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:02.875 16:03:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:02.875 16:03:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:02.875 16:03:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:02.875 16:03:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:02.875 16:03:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:02.875 16:03:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:02.875 16:03:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:02.875 16:03:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:02.875 16:03:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:02.875 [2024-07-15 16:03:38.704606] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:02.875 request: 00:12:02.875 { 00:12:02.875 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:02.875 "nsid": 2, 00:12:02.875 "host": "nqn.2016-06.io.spdk:host1", 00:12:02.875 "method": "nvmf_ns_remove_host", 00:12:02.875 "req_id": 1 00:12:02.875 } 00:12:02.875 Got JSON-RPC error response 00:12:02.875 response: 00:12:02.875 { 00:12:02.875 "code": -32602, 00:12:02.875 "message": "Invalid parameters" 00:12:02.875 } 00:12:03.136 16:03:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:03.136 16:03:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:03.136 16:03:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:03.136 16:03:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:03.136 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:03.136 16:03:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:03.136 16:03:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:03.136 16:03:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:03.136 16:03:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:03.136 16:03:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:03.136 16:03:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:03.136 16:03:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:03.136 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:03.136 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:03.136 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:03.136 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:03.136 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:03.136 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:03.136 16:03:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:03.136 16:03:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:03.136 16:03:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:03.136 16:03:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:03.136 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:03.136 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:03.136 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:03.136 [ 0]:0x2 00:12:03.136 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:03.136 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:03.136 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f5b843d472014903a878000f67a5e0ae 00:12:03.136 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
f5b843d472014903a878000f67a5e0ae != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:03.136 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:03.136 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:03.136 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.136 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2185815 00:12:03.136 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:03.136 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:03.136 16:03:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2185815 /var/tmp/host.sock 00:12:03.136 16:03:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 2185815 ']' 00:12:03.136 16:03:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:12:03.136 16:03:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:03.136 16:03:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:03.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:03.136 16:03:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:03.136 16:03:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:03.136 [2024-07-15 16:03:38.930626] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
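The visibility checks traced above reduce to one probe: a namespace that has been masked for this host either disappears from nvme list-ns or reports an all-zero NGUID. A minimal stand-alone sketch of that probe, assuming a controller already connected as /dev/nvme0 (device name, example NSID and the zero-NGUID sentinel are taken from the trace; this is not the test script itself):

# Sketch: reproduce the ns_is_visible-style check by hand.
nsid=0x2
if nvme list-ns /dev/nvme0 | grep -q "$nsid"; then
    # Namespace is listed; confirm it is really exposed by reading its NGUID.
    nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
    if [ "$nguid" != "00000000000000000000000000000000" ]; then
        echo "namespace $nsid visible, nguid=$nguid"
    else
        echo "namespace $nsid hidden (zero NGUID)"
    fi
else
    echo "namespace $nsid masked for this host"
fi
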
00:12:03.136 [2024-07-15 16:03:38.930683] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2185815 ] 00:12:03.136 EAL: No free 2048 kB hugepages reported on node 1 00:12:03.397 [2024-07-15 16:03:39.006575] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.397 [2024-07-15 16:03:39.070969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:03.988 16:03:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:03.988 16:03:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:12:03.988 16:03:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:04.300 16:03:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:04.300 16:03:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid e2dc182a-1172-4658-9342-df36dd9f3d7e 00:12:04.300 16:03:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:12:04.300 16:03:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g E2DC182A117246589342DF36DD9F3D7E -i 00:12:04.560 16:03:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 37621c07-45ec-4069-8074-a562550b32c6 00:12:04.560 16:03:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:12:04.560 16:03:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 37621C0745EC40698074A562550B32C6 -i 00:12:04.560 16:03:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:04.822 16:03:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:04.822 16:03:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:04.822 16:03:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:05.083 nvme0n1 00:12:05.083 16:03:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:05.083 16:03:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:12:05.654 nvme1n2 00:12:05.654 16:03:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:05.654 16:03:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:05.654 16:03:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:05.654 16:03:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:05.654 16:03:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:05.654 16:03:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:05.654 16:03:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:05.654 16:03:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:05.654 16:03:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:05.915 16:03:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ e2dc182a-1172-4658-9342-df36dd9f3d7e == \e\2\d\c\1\8\2\a\-\1\1\7\2\-\4\6\5\8\-\9\3\4\2\-\d\f\3\6\d\d\9\f\3\d\7\e ]] 00:12:05.915 16:03:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:05.915 16:03:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:05.915 16:03:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:05.915 16:03:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 37621c07-45ec-4069-8074-a562550b32c6 == \3\7\6\2\1\c\0\7\-\4\5\e\c\-\4\0\6\9\-\8\0\7\4\-\a\5\6\2\5\5\0\b\3\2\c\6 ]] 00:12:05.915 16:03:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 2185815 00:12:05.915 16:03:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 2185815 ']' 00:12:05.915 16:03:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 2185815 00:12:05.915 16:03:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:12:05.915 16:03:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:05.915 16:03:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2185815 00:12:06.176 16:03:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:06.176 16:03:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:06.177 16:03:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2185815' 00:12:06.177 killing process with pid 2185815 00:12:06.177 16:03:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 2185815 00:12:06.177 16:03:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 2185815 00:12:06.177 16:03:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:06.437 16:03:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:12:06.437 16:03:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:12:06.437 16:03:42 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:06.437 16:03:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:12:06.437 16:03:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:06.437 16:03:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:12:06.437 16:03:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:06.437 16:03:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:06.437 rmmod nvme_tcp 00:12:06.437 rmmod nvme_fabrics 00:12:06.437 rmmod nvme_keyring 00:12:06.437 16:03:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:06.437 16:03:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:12:06.437 16:03:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:12:06.437 16:03:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2183415 ']' 00:12:06.437 16:03:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2183415 00:12:06.437 16:03:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 2183415 ']' 00:12:06.437 16:03:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 2183415 00:12:06.437 16:03:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:12:06.437 16:03:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:06.437 16:03:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2183415 00:12:06.698 16:03:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:06.698 16:03:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:06.698 16:03:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2183415' 00:12:06.698 killing process with pid 2183415 00:12:06.698 16:03:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 2183415 00:12:06.698 16:03:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 2183415 00:12:06.698 16:03:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:06.698 16:03:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:06.698 16:03:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:06.698 16:03:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:06.698 16:03:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:06.698 16:03:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:06.698 16:03:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:06.698 16:03:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:09.244 16:03:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:09.244 00:12:09.244 real 0m24.298s 00:12:09.244 user 0m24.268s 00:12:09.244 sys 0m7.262s 00:12:09.244 16:03:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:09.244 16:03:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:09.244 ************************************ 00:12:09.244 END TEST nvmf_ns_masking 00:12:09.244 ************************************ 00:12:09.244 16:03:44 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:12:09.244 16:03:44 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:12:09.244 16:03:44 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:09.244 16:03:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:09.244 16:03:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:09.244 16:03:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:09.244 ************************************ 00:12:09.244 START TEST nvmf_nvme_cli 00:12:09.244 ************************************ 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:09.244 * Looking for test storage... 00:12:09.244 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:12:09.244 16:03:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:15.829 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:15.829 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:15.829 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.829 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:15.830 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:15.830 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.830 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:15.830 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:12:15.830 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:15.830 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:15.830 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:15.830 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:15.830 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:15.830 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:15.830 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:15.830 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:15.830 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:15.830 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:15.830 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:15.830 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:15.830 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:15.830 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:15.830 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:15.830 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:16.090 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:16.090 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:16.090 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:16.090 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:16.090 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:16.090 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:16.090 16:03:51 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:16.090 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:16.090 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.382 ms 00:12:16.090 00:12:16.090 --- 10.0.0.2 ping statistics --- 00:12:16.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.090 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:12:16.090 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:16.090 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:16.090 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:12:16.090 00:12:16.090 --- 10.0.0.1 ping statistics --- 00:12:16.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.090 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:12:16.090 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:16.090 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:12:16.090 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:16.090 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:16.090 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:16.090 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:16.090 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:16.090 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:16.090 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:16.090 16:03:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:16.090 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:16.090 16:03:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:16.090 16:03:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:16.350 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2190835 00:12:16.350 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2190835 00:12:16.350 16:03:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:16.350 16:03:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 2190835 ']' 00:12:16.350 16:03:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.350 16:03:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:16.350 16:03:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.350 16:03:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:16.350 16:03:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:16.350 [2024-07-15 16:03:51.989051] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
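Before the target app is launched, nvmf_tcp_init has just moved one port of the E810 pair into a private network namespace so initiator and target traffic actually cross the link; the two pings above confirm both directions. The same topology can be rebuilt by hand as a sketch (interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses simply mirror this trace; substitute your own NIC names):

# Target side lives in its own netns, initiator side stays in the root netns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # open the NVMe/TCP port
ping -c 1 10.0.0.2                                                    # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target -> initiator
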
00:12:16.350 [2024-07-15 16:03:51.989116] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:16.350 EAL: No free 2048 kB hugepages reported on node 1 00:12:16.350 [2024-07-15 16:03:52.060239] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:16.350 [2024-07-15 16:03:52.136612] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:16.350 [2024-07-15 16:03:52.136653] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:16.350 [2024-07-15 16:03:52.136661] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:16.350 [2024-07-15 16:03:52.136667] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:16.350 [2024-07-15 16:03:52.136673] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:16.350 [2024-07-15 16:03:52.136814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:16.351 [2024-07-15 16:03:52.136934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:16.351 [2024-07-15 16:03:52.137093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.351 [2024-07-15 16:03:52.137095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:17.292 16:03:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:17.292 16:03:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:12:17.292 16:03:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:17.292 16:03:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:17.292 16:03:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:17.292 16:03:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:17.292 16:03:52 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:17.292 16:03:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.292 16:03:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:17.292 [2024-07-15 16:03:52.815720] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:17.292 16:03:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.292 16:03:52 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:17.292 16:03:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.292 16:03:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:17.292 Malloc0 00:12:17.292 16:03:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.292 16:03:52 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:17.292 16:03:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.292 16:03:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:17.292 Malloc1 00:12:17.292 16:03:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.292 16:03:52 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:17.292 16:03:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.292 16:03:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:17.292 16:03:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.292 16:03:52 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:17.292 16:03:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.292 16:03:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:17.292 16:03:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.292 16:03:52 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:17.292 16:03:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.292 16:03:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:17.292 16:03:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.292 16:03:52 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:17.292 16:03:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.292 16:03:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:17.292 [2024-07-15 16:03:52.905555] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:17.292 16:03:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.292 16:03:52 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:17.292 16:03:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.292 16:03:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:17.292 16:03:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.292 16:03:52 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:12:17.292 00:12:17.292 Discovery Log Number of Records 2, Generation counter 2 00:12:17.292 =====Discovery Log Entry 0====== 00:12:17.292 trtype: tcp 00:12:17.292 adrfam: ipv4 00:12:17.292 subtype: current discovery subsystem 00:12:17.292 treq: not required 00:12:17.292 portid: 0 00:12:17.292 trsvcid: 4420 00:12:17.292 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:17.292 traddr: 10.0.0.2 00:12:17.292 eflags: explicit discovery connections, duplicate discovery information 00:12:17.292 sectype: none 00:12:17.292 =====Discovery Log Entry 1====== 00:12:17.292 trtype: tcp 00:12:17.292 adrfam: ipv4 00:12:17.292 subtype: nvme subsystem 00:12:17.292 treq: not required 00:12:17.292 portid: 0 00:12:17.292 trsvcid: 4420 00:12:17.292 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:17.292 traddr: 10.0.0.2 00:12:17.292 eflags: none 00:12:17.292 sectype: none 00:12:17.292 16:03:53 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:17.292 16:03:53 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:17.292 16:03:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:17.292 16:03:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:17.292 16:03:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:17.292 16:03:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:17.292 16:03:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:17.292 16:03:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:17.292 16:03:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:17.292 16:03:53 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:17.292 16:03:53 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:19.203 16:03:54 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:19.203 16:03:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:12:19.203 16:03:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:19.203 16:03:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:19.203 16:03:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:19.203 16:03:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:12:21.108 16:03:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:21.108 16:03:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:21.108 16:03:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:21.108 16:03:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:21.108 16:03:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:21.108 16:03:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:12:21.108 16:03:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:21.108 16:03:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:21.108 16:03:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:21.108 16:03:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:21.108 16:03:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:21.108 16:03:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:21.108 16:03:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:21.108 16:03:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:21.108 16:03:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:21.108 16:03:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:21.108 16:03:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:21.108 16:03:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:21.108 16:03:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:21.108 16:03:56 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:21.108 16:03:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:12:21.108 /dev/nvme0n1 ]] 00:12:21.108 16:03:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:21.108 16:03:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:21.108 16:03:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:21.108 16:03:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:21.108 16:03:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:21.108 16:03:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:21.108 16:03:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:21.108 16:03:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:21.108 16:03:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:21.108 16:03:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:21.108 16:03:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:21.108 16:03:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:21.108 16:03:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:21.108 16:03:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:21.108 16:03:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:21.108 16:03:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:21.108 16:03:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:21.368 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.368 16:03:57 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:21.368 16:03:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:12:21.368 16:03:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:21.368 16:03:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:21.368 16:03:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:21.368 16:03:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:21.368 16:03:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:12:21.368 16:03:57 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:21.368 16:03:57 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:21.368 16:03:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.368 16:03:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:21.368 16:03:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.368 16:03:57 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:21.368 16:03:57 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:21.368 16:03:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:21.368 16:03:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:12:21.368 16:03:57 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:21.368 16:03:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:12:21.368 16:03:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:21.368 16:03:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:21.368 rmmod nvme_tcp 00:12:21.368 rmmod nvme_fabrics 00:12:21.368 rmmod nvme_keyring 00:12:21.368 16:03:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:21.368 16:03:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:12:21.368 16:03:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:12:21.368 16:03:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2190835 ']' 00:12:21.368 16:03:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2190835 00:12:21.368 16:03:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 2190835 ']' 00:12:21.368 16:03:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 2190835 00:12:21.368 16:03:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:12:21.368 16:03:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:21.368 16:03:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2190835 00:12:21.368 16:03:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:21.368 16:03:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:21.368 16:03:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2190835' 00:12:21.368 killing process with pid 2190835 00:12:21.368 16:03:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 2190835 00:12:21.368 16:03:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 2190835 00:12:21.628 16:03:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:21.628 16:03:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:21.628 16:03:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:21.628 16:03:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:21.628 16:03:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:21.628 16:03:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.628 16:03:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:21.628 16:03:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.173 16:03:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:24.173 00:12:24.173 real 0m14.842s 00:12:24.173 user 0m22.951s 00:12:24.173 sys 0m5.918s 00:12:24.173 16:03:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:24.173 16:03:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:24.173 ************************************ 00:12:24.173 END TEST nvmf_nvme_cli 00:12:24.173 ************************************ 00:12:24.173 16:03:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:24.173 16:03:59 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:12:24.173 16:03:59 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:24.173 16:03:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:24.173 16:03:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:24.173 16:03:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:24.173 ************************************ 00:12:24.173 START TEST nvmf_vfio_user 00:12:24.173 ************************************ 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:24.173 * Looking for test storage... 00:12:24.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:24.173 
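The three values just set (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512, NUM_DEVICES=2) determine everything setup_nvmf_vfio_user creates. The following is a condensed sketch of that setup, restating the rpc.py calls that appear verbatim in the trace below; the loop form and the $rpc shorthand are illustrative, while the paths, sizes and NQNs are the ones this run actually uses (the nvmf_tgt process itself is launched first, as the next lines of the trace show).

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t VFIOUSER                        # register the vfio-user transport
  for i in 1 2; do                                              # NUM_DEVICES=2
      mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i         # per-controller socket directory
      $rpc bdev_malloc_create 64 512 -b Malloc$i                # 64 MiB malloc bdev, 512-byte blocks
      $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
      $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
          -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
  done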
16:03:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2192323 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2192323' 00:12:24.173 Process pid: 2192323 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2192323 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 2192323 ']' 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:24.173 16:03:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:24.173 [2024-07-15 16:03:59.694241] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:12:24.173 [2024-07-15 16:03:59.694312] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:24.173 EAL: No free 2048 kB hugepages reported on node 1 00:12:24.173 [2024-07-15 16:03:59.758776] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:24.173 [2024-07-15 16:03:59.833560] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:24.173 [2024-07-15 16:03:59.833593] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:24.173 [2024-07-15 16:03:59.833601] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:24.173 [2024-07-15 16:03:59.833607] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:24.174 [2024-07-15 16:03:59.833613] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:24.174 [2024-07-15 16:03:59.833759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:24.174 [2024-07-15 16:03:59.833879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:24.174 [2024-07-15 16:03:59.834026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.174 [2024-07-15 16:03:59.834027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:24.744 16:04:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:24.744 16:04:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:12:24.744 16:04:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:25.685 16:04:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:25.945 16:04:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:25.945 16:04:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:25.945 16:04:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:25.945 16:04:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:25.945 16:04:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:26.205 Malloc1 00:12:26.205 16:04:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:26.205 16:04:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:26.465 16:04:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:26.726 16:04:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:26.726 16:04:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:26.726 16:04:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:26.726 Malloc2 00:12:26.726 16:04:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:26.986 16:04:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:27.247 16:04:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:27.247 16:04:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:27.247 16:04:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:27.247 16:04:03 
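With both controllers exported, run_nvmf_vfio_user drives them from the initiator side. Every tool invoked from here on (spdk_nvme_identify, spdk_nvme_perf, and the arbitration/hello_world/overhead/aer examples) addresses the target through a transport ID string rather than an IP address and port. A minimal sketch of that usage, with the endpoint and subsystem NQN this run creates; the $SPDK and $TRID variables are shorthand introduced here, not part of the test script:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
  # Dump controller and namespace data over the vfio-user socket, with nvme/vfio debug logging
  $SPDK/build/bin/spdk_nvme_identify -r "$TRID" -g -L nvme -L nvme_vfio -L vfio_pci
  # 5-second 4 KiB read test at queue depth 128, pinned to core 1 (mask 0x2)
  $SPDK/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2

The remainder of this trace follows the same pattern: the identify output below, perf runs with read and write workloads, then the reconnect, arbitration, hello_world, overhead and aer example binaries against the same endpoint.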
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:27.247 16:04:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:27.247 16:04:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:27.247 16:04:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:27.247 [2024-07-15 16:04:03.047790] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:12:27.247 [2024-07-15 16:04:03.047835] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2193020 ] 00:12:27.247 EAL: No free 2048 kB hugepages reported on node 1 00:12:27.247 [2024-07-15 16:04:03.077037] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:27.247 [2024-07-15 16:04:03.085403] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:27.247 [2024-07-15 16:04:03.085426] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f8690862000 00:12:27.247 [2024-07-15 16:04:03.086398] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:27.247 [2024-07-15 16:04:03.087404] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:27.247 [2024-07-15 16:04:03.088410] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:27.510 [2024-07-15 16:04:03.089405] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:27.510 [2024-07-15 16:04:03.090423] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:27.510 [2024-07-15 16:04:03.091428] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:27.510 [2024-07-15 16:04:03.092428] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:27.510 [2024-07-15 16:04:03.093429] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:27.510 [2024-07-15 16:04:03.094443] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:27.510 [2024-07-15 16:04:03.094452] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f8690857000 00:12:27.510 [2024-07-15 16:04:03.095778] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:27.510 [2024-07-15 16:04:03.112701] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:27.510 [2024-07-15 16:04:03.112734] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:12:27.510 [2024-07-15 16:04:03.115558] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:27.510 [2024-07-15 16:04:03.115605] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:27.510 [2024-07-15 16:04:03.115777] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:12:27.510 [2024-07-15 16:04:03.115793] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:12:27.510 [2024-07-15 16:04:03.115799] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:12:27.510 [2024-07-15 16:04:03.120129] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:27.510 [2024-07-15 16:04:03.120139] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:12:27.510 [2024-07-15 16:04:03.120147] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:12:27.510 [2024-07-15 16:04:03.120580] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:27.510 [2024-07-15 16:04:03.120590] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:12:27.510 [2024-07-15 16:04:03.120597] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:12:27.510 [2024-07-15 16:04:03.121586] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:27.510 [2024-07-15 16:04:03.121595] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:27.510 [2024-07-15 16:04:03.122591] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:27.510 [2024-07-15 16:04:03.122598] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:12:27.510 [2024-07-15 16:04:03.122603] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:12:27.510 [2024-07-15 16:04:03.122610] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:27.510 [2024-07-15 16:04:03.122715] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:12:27.510 [2024-07-15 16:04:03.122720] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:27.510 [2024-07-15 16:04:03.122725] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:27.510 [2024-07-15 16:04:03.123602] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:27.510 [2024-07-15 16:04:03.124609] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:27.510 [2024-07-15 16:04:03.125613] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:27.510 [2024-07-15 16:04:03.126613] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:27.510 [2024-07-15 16:04:03.126666] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:27.510 [2024-07-15 16:04:03.127620] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:27.510 [2024-07-15 16:04:03.127628] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:27.510 [2024-07-15 16:04:03.127633] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:12:27.510 [2024-07-15 16:04:03.127654] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:12:27.510 [2024-07-15 16:04:03.127662] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:12:27.510 [2024-07-15 16:04:03.127675] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:27.510 [2024-07-15 16:04:03.127680] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:27.510 [2024-07-15 16:04:03.127693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:27.511 [2024-07-15 16:04:03.127724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:27.511 [2024-07-15 16:04:03.127732] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:12:27.511 [2024-07-15 16:04:03.127739] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:12:27.511 [2024-07-15 16:04:03.127747] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:12:27.511 [2024-07-15 16:04:03.127752] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:27.511 [2024-07-15 16:04:03.127756] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:12:27.511 [2024-07-15 16:04:03.127761] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:12:27.511 [2024-07-15 16:04:03.127765] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:12:27.511 [2024-07-15 16:04:03.127773] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:12:27.511 [2024-07-15 16:04:03.127783] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:27.511 [2024-07-15 16:04:03.127792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:27.511 [2024-07-15 16:04:03.127805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:27.511 [2024-07-15 16:04:03.127813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:27.511 [2024-07-15 16:04:03.127822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:27.511 [2024-07-15 16:04:03.127830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:27.511 [2024-07-15 16:04:03.127834] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:27.511 [2024-07-15 16:04:03.127842] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:27.511 [2024-07-15 16:04:03.127852] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:27.511 [2024-07-15 16:04:03.127861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:27.511 [2024-07-15 16:04:03.127866] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:12:27.511 [2024-07-15 16:04:03.127871] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:27.511 [2024-07-15 16:04:03.127878] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:12:27.511 [2024-07-15 16:04:03.127883] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:27.511 [2024-07-15 16:04:03.127892] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:27.511 [2024-07-15 16:04:03.127902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:27.511 [2024-07-15 16:04:03.127961] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:12:27.511 [2024-07-15 16:04:03.127969] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:27.511 [2024-07-15 16:04:03.127977] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:27.511 [2024-07-15 16:04:03.127983] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:27.511 [2024-07-15 16:04:03.127989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:27.511 [2024-07-15 16:04:03.128002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:27.511 [2024-07-15 16:04:03.128011] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:12:27.511 [2024-07-15 16:04:03.128022] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:12:27.511 [2024-07-15 16:04:03.128030] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:12:27.511 [2024-07-15 16:04:03.128037] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:27.511 [2024-07-15 16:04:03.128041] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:27.511 [2024-07-15 16:04:03.128047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:27.511 [2024-07-15 16:04:03.128063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:27.511 [2024-07-15 16:04:03.128074] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:27.511 [2024-07-15 16:04:03.128082] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:27.511 [2024-07-15 16:04:03.128088] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:27.511 [2024-07-15 16:04:03.128093] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:27.511 [2024-07-15 16:04:03.128099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:27.511 [2024-07-15 16:04:03.128106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:27.511 [2024-07-15 16:04:03.128113] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:27.511 [2024-07-15 16:04:03.128120] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
00:12:27.511 [2024-07-15 16:04:03.128132] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:12:27.511 [2024-07-15 16:04:03.128138] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:12:27.511 [2024-07-15 16:04:03.128143] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:27.511 [2024-07-15 16:04:03.128148] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:12:27.511 [2024-07-15 16:04:03.128153] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:12:27.511 [2024-07-15 16:04:03.128158] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:12:27.511 [2024-07-15 16:04:03.128163] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:12:27.511 [2024-07-15 16:04:03.128181] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:27.511 [2024-07-15 16:04:03.128192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:27.511 [2024-07-15 16:04:03.128203] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:27.511 [2024-07-15 16:04:03.128215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:27.511 [2024-07-15 16:04:03.128226] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:27.511 [2024-07-15 16:04:03.128233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:27.511 [2024-07-15 16:04:03.128244] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:27.511 [2024-07-15 16:04:03.128251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:27.511 [2024-07-15 16:04:03.128264] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:27.511 [2024-07-15 16:04:03.128269] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:27.511 [2024-07-15 16:04:03.128272] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:27.511 [2024-07-15 16:04:03.128276] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:27.511 [2024-07-15 16:04:03.128282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:27.511 [2024-07-15 16:04:03.128289] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:27.511 
[2024-07-15 16:04:03.128293] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:27.511 [2024-07-15 16:04:03.128299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:27.511 [2024-07-15 16:04:03.128307] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:27.511 [2024-07-15 16:04:03.128311] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:27.511 [2024-07-15 16:04:03.128317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:27.511 [2024-07-15 16:04:03.128324] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:27.511 [2024-07-15 16:04:03.128329] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:27.511 [2024-07-15 16:04:03.128334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:27.511 [2024-07-15 16:04:03.128342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:27.511 [2024-07-15 16:04:03.128353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:27.511 [2024-07-15 16:04:03.128363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:27.511 [2024-07-15 16:04:03.128371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:27.511 ===================================================== 00:12:27.511 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:27.511 ===================================================== 00:12:27.511 Controller Capabilities/Features 00:12:27.511 ================================ 00:12:27.511 Vendor ID: 4e58 00:12:27.511 Subsystem Vendor ID: 4e58 00:12:27.511 Serial Number: SPDK1 00:12:27.511 Model Number: SPDK bdev Controller 00:12:27.511 Firmware Version: 24.09 00:12:27.511 Recommended Arb Burst: 6 00:12:27.511 IEEE OUI Identifier: 8d 6b 50 00:12:27.511 Multi-path I/O 00:12:27.511 May have multiple subsystem ports: Yes 00:12:27.511 May have multiple controllers: Yes 00:12:27.511 Associated with SR-IOV VF: No 00:12:27.512 Max Data Transfer Size: 131072 00:12:27.512 Max Number of Namespaces: 32 00:12:27.512 Max Number of I/O Queues: 127 00:12:27.512 NVMe Specification Version (VS): 1.3 00:12:27.512 NVMe Specification Version (Identify): 1.3 00:12:27.512 Maximum Queue Entries: 256 00:12:27.512 Contiguous Queues Required: Yes 00:12:27.512 Arbitration Mechanisms Supported 00:12:27.512 Weighted Round Robin: Not Supported 00:12:27.512 Vendor Specific: Not Supported 00:12:27.512 Reset Timeout: 15000 ms 00:12:27.512 Doorbell Stride: 4 bytes 00:12:27.512 NVM Subsystem Reset: Not Supported 00:12:27.512 Command Sets Supported 00:12:27.512 NVM Command Set: Supported 00:12:27.512 Boot Partition: Not Supported 00:12:27.512 Memory Page Size Minimum: 4096 bytes 00:12:27.512 Memory Page Size Maximum: 4096 bytes 00:12:27.512 Persistent Memory Region: Not Supported 
00:12:27.512 Optional Asynchronous Events Supported 00:12:27.512 Namespace Attribute Notices: Supported 00:12:27.512 Firmware Activation Notices: Not Supported 00:12:27.512 ANA Change Notices: Not Supported 00:12:27.512 PLE Aggregate Log Change Notices: Not Supported 00:12:27.512 LBA Status Info Alert Notices: Not Supported 00:12:27.512 EGE Aggregate Log Change Notices: Not Supported 00:12:27.512 Normal NVM Subsystem Shutdown event: Not Supported 00:12:27.512 Zone Descriptor Change Notices: Not Supported 00:12:27.512 Discovery Log Change Notices: Not Supported 00:12:27.512 Controller Attributes 00:12:27.512 128-bit Host Identifier: Supported 00:12:27.512 Non-Operational Permissive Mode: Not Supported 00:12:27.512 NVM Sets: Not Supported 00:12:27.512 Read Recovery Levels: Not Supported 00:12:27.512 Endurance Groups: Not Supported 00:12:27.512 Predictable Latency Mode: Not Supported 00:12:27.512 Traffic Based Keep ALive: Not Supported 00:12:27.512 Namespace Granularity: Not Supported 00:12:27.512 SQ Associations: Not Supported 00:12:27.512 UUID List: Not Supported 00:12:27.512 Multi-Domain Subsystem: Not Supported 00:12:27.512 Fixed Capacity Management: Not Supported 00:12:27.512 Variable Capacity Management: Not Supported 00:12:27.512 Delete Endurance Group: Not Supported 00:12:27.512 Delete NVM Set: Not Supported 00:12:27.512 Extended LBA Formats Supported: Not Supported 00:12:27.512 Flexible Data Placement Supported: Not Supported 00:12:27.512 00:12:27.512 Controller Memory Buffer Support 00:12:27.512 ================================ 00:12:27.512 Supported: No 00:12:27.512 00:12:27.512 Persistent Memory Region Support 00:12:27.512 ================================ 00:12:27.512 Supported: No 00:12:27.512 00:12:27.512 Admin Command Set Attributes 00:12:27.512 ============================ 00:12:27.512 Security Send/Receive: Not Supported 00:12:27.512 Format NVM: Not Supported 00:12:27.512 Firmware Activate/Download: Not Supported 00:12:27.512 Namespace Management: Not Supported 00:12:27.512 Device Self-Test: Not Supported 00:12:27.512 Directives: Not Supported 00:12:27.512 NVMe-MI: Not Supported 00:12:27.512 Virtualization Management: Not Supported 00:12:27.512 Doorbell Buffer Config: Not Supported 00:12:27.512 Get LBA Status Capability: Not Supported 00:12:27.512 Command & Feature Lockdown Capability: Not Supported 00:12:27.512 Abort Command Limit: 4 00:12:27.512 Async Event Request Limit: 4 00:12:27.512 Number of Firmware Slots: N/A 00:12:27.512 Firmware Slot 1 Read-Only: N/A 00:12:27.512 Firmware Activation Without Reset: N/A 00:12:27.512 Multiple Update Detection Support: N/A 00:12:27.512 Firmware Update Granularity: No Information Provided 00:12:27.512 Per-Namespace SMART Log: No 00:12:27.512 Asymmetric Namespace Access Log Page: Not Supported 00:12:27.512 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:27.512 Command Effects Log Page: Supported 00:12:27.512 Get Log Page Extended Data: Supported 00:12:27.512 Telemetry Log Pages: Not Supported 00:12:27.512 Persistent Event Log Pages: Not Supported 00:12:27.512 Supported Log Pages Log Page: May Support 00:12:27.512 Commands Supported & Effects Log Page: Not Supported 00:12:27.512 Feature Identifiers & Effects Log Page:May Support 00:12:27.512 NVMe-MI Commands & Effects Log Page: May Support 00:12:27.512 Data Area 4 for Telemetry Log: Not Supported 00:12:27.512 Error Log Page Entries Supported: 128 00:12:27.512 Keep Alive: Supported 00:12:27.512 Keep Alive Granularity: 10000 ms 00:12:27.512 00:12:27.512 NVM Command Set Attributes 
00:12:27.512 ========================== 00:12:27.512 Submission Queue Entry Size 00:12:27.512 Max: 64 00:12:27.512 Min: 64 00:12:27.512 Completion Queue Entry Size 00:12:27.512 Max: 16 00:12:27.512 Min: 16 00:12:27.512 Number of Namespaces: 32 00:12:27.512 Compare Command: Supported 00:12:27.512 Write Uncorrectable Command: Not Supported 00:12:27.512 Dataset Management Command: Supported 00:12:27.512 Write Zeroes Command: Supported 00:12:27.512 Set Features Save Field: Not Supported 00:12:27.512 Reservations: Not Supported 00:12:27.512 Timestamp: Not Supported 00:12:27.512 Copy: Supported 00:12:27.512 Volatile Write Cache: Present 00:12:27.512 Atomic Write Unit (Normal): 1 00:12:27.512 Atomic Write Unit (PFail): 1 00:12:27.512 Atomic Compare & Write Unit: 1 00:12:27.512 Fused Compare & Write: Supported 00:12:27.512 Scatter-Gather List 00:12:27.512 SGL Command Set: Supported (Dword aligned) 00:12:27.512 SGL Keyed: Not Supported 00:12:27.512 SGL Bit Bucket Descriptor: Not Supported 00:12:27.512 SGL Metadata Pointer: Not Supported 00:12:27.512 Oversized SGL: Not Supported 00:12:27.512 SGL Metadata Address: Not Supported 00:12:27.512 SGL Offset: Not Supported 00:12:27.512 Transport SGL Data Block: Not Supported 00:12:27.512 Replay Protected Memory Block: Not Supported 00:12:27.512 00:12:27.512 Firmware Slot Information 00:12:27.512 ========================= 00:12:27.512 Active slot: 1 00:12:27.512 Slot 1 Firmware Revision: 24.09 00:12:27.512 00:12:27.512 00:12:27.512 Commands Supported and Effects 00:12:27.512 ============================== 00:12:27.512 Admin Commands 00:12:27.512 -------------- 00:12:27.512 Get Log Page (02h): Supported 00:12:27.512 Identify (06h): Supported 00:12:27.512 Abort (08h): Supported 00:12:27.512 Set Features (09h): Supported 00:12:27.512 Get Features (0Ah): Supported 00:12:27.512 Asynchronous Event Request (0Ch): Supported 00:12:27.512 Keep Alive (18h): Supported 00:12:27.512 I/O Commands 00:12:27.512 ------------ 00:12:27.512 Flush (00h): Supported LBA-Change 00:12:27.512 Write (01h): Supported LBA-Change 00:12:27.512 Read (02h): Supported 00:12:27.512 Compare (05h): Supported 00:12:27.512 Write Zeroes (08h): Supported LBA-Change 00:12:27.512 Dataset Management (09h): Supported LBA-Change 00:12:27.512 Copy (19h): Supported LBA-Change 00:12:27.512 00:12:27.512 Error Log 00:12:27.512 ========= 00:12:27.512 00:12:27.512 Arbitration 00:12:27.512 =========== 00:12:27.512 Arbitration Burst: 1 00:12:27.512 00:12:27.512 Power Management 00:12:27.512 ================ 00:12:27.512 Number of Power States: 1 00:12:27.512 Current Power State: Power State #0 00:12:27.512 Power State #0: 00:12:27.512 Max Power: 0.00 W 00:12:27.512 Non-Operational State: Operational 00:12:27.512 Entry Latency: Not Reported 00:12:27.512 Exit Latency: Not Reported 00:12:27.512 Relative Read Throughput: 0 00:12:27.512 Relative Read Latency: 0 00:12:27.512 Relative Write Throughput: 0 00:12:27.512 Relative Write Latency: 0 00:12:27.512 Idle Power: Not Reported 00:12:27.512 Active Power: Not Reported 00:12:27.512 Non-Operational Permissive Mode: Not Supported 00:12:27.513 00:12:27.513 Health Information 00:12:27.513 ================== 00:12:27.513 Critical Warnings: 00:12:27.513 Available Spare Space: OK 00:12:27.513 Temperature: OK 00:12:27.513 Device Reliability: OK 00:12:27.513 Read Only: No 00:12:27.513 Volatile Memory Backup: OK 00:12:27.513 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:27.513 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:27.513 Available Spare: 0% 00:12:27.513 
Available Sp[2024-07-15 16:04:03.128472] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:27.513 [2024-07-15 16:04:03.128481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:27.513 [2024-07-15 16:04:03.128510] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:12:27.513 [2024-07-15 16:04:03.128520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:27.513 [2024-07-15 16:04:03.128526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:27.513 [2024-07-15 16:04:03.128533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:27.513 [2024-07-15 16:04:03.128539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:27.513 [2024-07-15 16:04:03.128626] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:27.513 [2024-07-15 16:04:03.128636] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:27.513 [2024-07-15 16:04:03.129628] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:27.513 [2024-07-15 16:04:03.129669] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:12:27.513 [2024-07-15 16:04:03.129676] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:12:27.513 [2024-07-15 16:04:03.130630] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:27.513 [2024-07-15 16:04:03.130642] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:12:27.513 [2024-07-15 16:04:03.130698] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:27.513 [2024-07-15 16:04:03.132663] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:27.513 are Threshold: 0% 00:12:27.513 Life Percentage Used: 0% 00:12:27.513 Data Units Read: 0 00:12:27.513 Data Units Written: 0 00:12:27.513 Host Read Commands: 0 00:12:27.513 Host Write Commands: 0 00:12:27.513 Controller Busy Time: 0 minutes 00:12:27.513 Power Cycles: 0 00:12:27.513 Power On Hours: 0 hours 00:12:27.513 Unsafe Shutdowns: 0 00:12:27.513 Unrecoverable Media Errors: 0 00:12:27.513 Lifetime Error Log Entries: 0 00:12:27.513 Warning Temperature Time: 0 minutes 00:12:27.513 Critical Temperature Time: 0 minutes 00:12:27.513 00:12:27.513 Number of Queues 00:12:27.513 ================ 00:12:27.513 Number of I/O Submission Queues: 127 00:12:27.513 Number of I/O Completion Queues: 127 00:12:27.513 00:12:27.513 Active Namespaces 00:12:27.513 ================= 00:12:27.513 Namespace ID:1 00:12:27.513 Error Recovery Timeout: Unlimited 00:12:27.513 Command 
Set Identifier: NVM (00h) 00:12:27.513 Deallocate: Supported 00:12:27.513 Deallocated/Unwritten Error: Not Supported 00:12:27.513 Deallocated Read Value: Unknown 00:12:27.513 Deallocate in Write Zeroes: Not Supported 00:12:27.513 Deallocated Guard Field: 0xFFFF 00:12:27.513 Flush: Supported 00:12:27.513 Reservation: Supported 00:12:27.513 Namespace Sharing Capabilities: Multiple Controllers 00:12:27.513 Size (in LBAs): 131072 (0GiB) 00:12:27.513 Capacity (in LBAs): 131072 (0GiB) 00:12:27.513 Utilization (in LBAs): 131072 (0GiB) 00:12:27.513 NGUID: 4AB68AA5A89341268E02EC0F634D1D1E 00:12:27.513 UUID: 4ab68aa5-a893-4126-8e02-ec0f634d1d1e 00:12:27.513 Thin Provisioning: Not Supported 00:12:27.513 Per-NS Atomic Units: Yes 00:12:27.513 Atomic Boundary Size (Normal): 0 00:12:27.513 Atomic Boundary Size (PFail): 0 00:12:27.513 Atomic Boundary Offset: 0 00:12:27.513 Maximum Single Source Range Length: 65535 00:12:27.513 Maximum Copy Length: 65535 00:12:27.513 Maximum Source Range Count: 1 00:12:27.513 NGUID/EUI64 Never Reused: No 00:12:27.513 Namespace Write Protected: No 00:12:27.513 Number of LBA Formats: 1 00:12:27.513 Current LBA Format: LBA Format #00 00:12:27.513 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:27.513 00:12:27.513 16:04:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:27.513 EAL: No free 2048 kB hugepages reported on node 1 00:12:27.513 [2024-07-15 16:04:03.314742] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:32.976 Initializing NVMe Controllers 00:12:32.976 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:32.976 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:32.976 Initialization complete. Launching workers. 00:12:32.976 ======================================================== 00:12:32.976 Latency(us) 00:12:32.976 Device Information : IOPS MiB/s Average min max 00:12:32.976 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39961.50 156.10 3202.95 830.12 6824.13 00:12:32.976 ======================================================== 00:12:32.976 Total : 39961.50 156.10 3202.95 830.12 6824.13 00:12:32.976 00:12:32.976 [2024-07-15 16:04:08.334588] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:32.976 16:04:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:32.976 EAL: No free 2048 kB hugepages reported on node 1 00:12:32.976 [2024-07-15 16:04:08.519452] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:38.284 Initializing NVMe Controllers 00:12:38.284 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:38.284 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:38.284 Initialization complete. Launching workers. 
00:12:38.284 ======================================================== 00:12:38.284 Latency(us) 00:12:38.284 Device Information : IOPS MiB/s Average min max 00:12:38.284 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16037.53 62.65 7986.85 5360.99 14813.23 00:12:38.284 ======================================================== 00:12:38.284 Total : 16037.53 62.65 7986.85 5360.99 14813.23 00:12:38.284 00:12:38.284 [2024-07-15 16:04:13.561980] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:38.284 16:04:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:38.284 EAL: No free 2048 kB hugepages reported on node 1 00:12:38.284 [2024-07-15 16:04:13.751841] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:43.606 [2024-07-15 16:04:18.825364] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:43.606 Initializing NVMe Controllers 00:12:43.606 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:43.606 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:43.606 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:43.606 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:43.606 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:43.606 Initialization complete. Launching workers. 00:12:43.606 Starting thread on core 2 00:12:43.606 Starting thread on core 3 00:12:43.606 Starting thread on core 1 00:12:43.606 16:04:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:43.606 EAL: No free 2048 kB hugepages reported on node 1 00:12:43.606 [2024-07-15 16:04:19.085956] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:46.906 [2024-07-15 16:04:22.139346] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:46.906 Initializing NVMe Controllers 00:12:46.906 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:46.906 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:46.906 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:46.906 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:46.906 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:46.906 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:46.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:46.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:46.906 Initialization complete. Launching workers. 
00:12:46.906 Starting thread on core 1 with urgent priority queue 00:12:46.906 Starting thread on core 2 with urgent priority queue 00:12:46.906 Starting thread on core 3 with urgent priority queue 00:12:46.906 Starting thread on core 0 with urgent priority queue 00:12:46.906 SPDK bdev Controller (SPDK1 ) core 0: 6594.00 IO/s 15.17 secs/100000 ios 00:12:46.906 SPDK bdev Controller (SPDK1 ) core 1: 7099.00 IO/s 14.09 secs/100000 ios 00:12:46.906 SPDK bdev Controller (SPDK1 ) core 2: 5427.00 IO/s 18.43 secs/100000 ios 00:12:46.906 SPDK bdev Controller (SPDK1 ) core 3: 8080.67 IO/s 12.38 secs/100000 ios 00:12:46.906 ======================================================== 00:12:46.906 00:12:46.906 16:04:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:46.906 EAL: No free 2048 kB hugepages reported on node 1 00:12:46.906 [2024-07-15 16:04:22.402571] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:46.906 Initializing NVMe Controllers 00:12:46.906 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:46.906 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:46.906 Namespace ID: 1 size: 0GB 00:12:46.906 Initialization complete. 00:12:46.906 INFO: using host memory buffer for IO 00:12:46.906 Hello world! 00:12:46.906 [2024-07-15 16:04:22.435760] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:46.906 16:04:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:46.906 EAL: No free 2048 kB hugepages reported on node 1 00:12:46.906 [2024-07-15 16:04:22.697602] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:47.882 Initializing NVMe Controllers 00:12:47.882 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:47.882 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:47.882 Initialization complete. Launching workers. 
00:12:47.882 submit (in ns) avg, min, max = 8166.5, 3894.2, 5993772.5 00:12:47.882 complete (in ns) avg, min, max = 18327.4, 2366.7, 5992411.7 00:12:47.882 00:12:47.882 Submit histogram 00:12:47.882 ================ 00:12:47.882 Range in us Cumulative Count 00:12:47.882 3.893 - 3.920: 1.9602% ( 381) 00:12:47.882 3.920 - 3.947: 7.2696% ( 1032) 00:12:47.882 3.947 - 3.973: 16.3245% ( 1760) 00:12:47.882 3.973 - 4.000: 27.8335% ( 2237) 00:12:47.882 4.000 - 4.027: 38.6171% ( 2096) 00:12:47.882 4.027 - 4.053: 51.3196% ( 2469) 00:12:47.882 4.053 - 4.080: 68.1381% ( 3269) 00:12:47.882 4.080 - 4.107: 82.4870% ( 2789) 00:12:47.882 4.107 - 4.133: 91.8094% ( 1812) 00:12:47.882 4.133 - 4.160: 96.7279% ( 956) 00:12:47.882 4.160 - 4.187: 98.5029% ( 345) 00:12:47.882 4.187 - 4.213: 99.1151% ( 119) 00:12:47.882 4.213 - 4.240: 99.3106% ( 38) 00:12:47.882 4.240 - 4.267: 99.3415% ( 6) 00:12:47.882 4.267 - 4.293: 99.3518% ( 2) 00:12:47.882 4.293 - 4.320: 99.3569% ( 1) 00:12:47.882 4.347 - 4.373: 99.3620% ( 1) 00:12:47.882 4.427 - 4.453: 99.3723% ( 2) 00:12:47.882 4.480 - 4.507: 99.3826% ( 2) 00:12:47.882 4.613 - 4.640: 99.3878% ( 1) 00:12:47.882 4.720 - 4.747: 99.3929% ( 1) 00:12:47.882 4.773 - 4.800: 99.3981% ( 1) 00:12:47.882 4.907 - 4.933: 99.4032% ( 1) 00:12:47.882 4.933 - 4.960: 99.4083% ( 1) 00:12:47.882 5.067 - 5.093: 99.4135% ( 1) 00:12:47.882 5.360 - 5.387: 99.4186% ( 1) 00:12:47.882 5.467 - 5.493: 99.4238% ( 1) 00:12:47.882 5.493 - 5.520: 99.4289% ( 1) 00:12:47.883 5.573 - 5.600: 99.4341% ( 1) 00:12:47.883 5.600 - 5.627: 99.4392% ( 1) 00:12:47.883 5.627 - 5.653: 99.4444% ( 1) 00:12:47.883 5.760 - 5.787: 99.4495% ( 1) 00:12:47.883 5.787 - 5.813: 99.4546% ( 1) 00:12:47.883 5.813 - 5.840: 99.4598% ( 1) 00:12:47.883 5.840 - 5.867: 99.4649% ( 1) 00:12:47.883 5.867 - 5.893: 99.4752% ( 2) 00:12:47.883 5.920 - 5.947: 99.4855% ( 2) 00:12:47.883 5.947 - 5.973: 99.5061% ( 4) 00:12:47.883 5.973 - 6.000: 99.5267% ( 4) 00:12:47.883 6.000 - 6.027: 99.5421% ( 3) 00:12:47.883 6.027 - 6.053: 99.5575% ( 3) 00:12:47.883 6.053 - 6.080: 99.5730% ( 3) 00:12:47.883 6.107 - 6.133: 99.5833% ( 2) 00:12:47.883 6.133 - 6.160: 99.5884% ( 1) 00:12:47.883 6.160 - 6.187: 99.5936% ( 1) 00:12:47.883 6.187 - 6.213: 99.6141% ( 4) 00:12:47.883 6.213 - 6.240: 99.6193% ( 1) 00:12:47.883 6.240 - 6.267: 99.6347% ( 3) 00:12:47.883 6.267 - 6.293: 99.6450% ( 2) 00:12:47.883 6.320 - 6.347: 99.6604% ( 3) 00:12:47.883 6.347 - 6.373: 99.6707% ( 2) 00:12:47.883 6.373 - 6.400: 99.6810% ( 2) 00:12:47.883 6.400 - 6.427: 99.6862% ( 1) 00:12:47.883 6.427 - 6.453: 99.6965% ( 2) 00:12:47.883 6.453 - 6.480: 99.7067% ( 2) 00:12:47.883 6.480 - 6.507: 99.7119% ( 1) 00:12:47.883 6.507 - 6.533: 99.7273% ( 3) 00:12:47.883 6.533 - 6.560: 99.7479% ( 4) 00:12:47.883 6.613 - 6.640: 99.7530% ( 1) 00:12:47.883 6.667 - 6.693: 99.7582% ( 1) 00:12:47.883 6.693 - 6.720: 99.7633% ( 1) 00:12:47.883 6.720 - 6.747: 99.7736% ( 2) 00:12:47.883 6.747 - 6.773: 99.7788% ( 1) 00:12:47.883 6.773 - 6.800: 99.7891% ( 2) 00:12:47.883 6.800 - 6.827: 99.7942% ( 1) 00:12:47.883 6.827 - 6.880: 99.8045% ( 2) 00:12:47.883 6.880 - 6.933: 99.8199% ( 3) 00:12:47.883 6.933 - 6.987: 99.8251% ( 1) 00:12:47.883 6.987 - 7.040: 99.8354% ( 2) 00:12:47.883 7.093 - 7.147: 99.8405% ( 1) 00:12:47.883 7.147 - 7.200: 99.8508% ( 2) 00:12:47.883 7.253 - 7.307: 99.8559% ( 1) 00:12:47.883 7.413 - 7.467: 99.8611% ( 1) 00:12:47.883 7.467 - 7.520: 99.8662% ( 1) 00:12:47.883 7.573 - 7.627: 99.8714% ( 1) 00:12:47.883 7.680 - 7.733: 99.8765% ( 1) 00:12:47.883 7.893 - 7.947: 99.8817% ( 1) 00:12:47.883 8.160 - 8.213: 
99.8868% ( 1) 00:12:47.883 8.267 - 8.320: 99.8920% ( 1) 00:12:47.883 [2024-07-15 16:04:23.718234] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:48.143 12.800 - 12.853: 99.8971% ( 1) 00:12:48.143 13.973 - 14.080: 99.9022% ( 1) 00:12:48.143 3986.773 - 4014.080: 99.9897% ( 17) 00:12:48.143 5980.160 - 6007.467: 100.0000% ( 2) 00:12:48.143 00:12:48.143 Complete histogram 00:12:48.143 ================== 00:12:48.143 Range in us Cumulative Count 00:12:48.143 2.360 - 2.373: 0.0051% ( 1) 00:12:48.143 2.373 - 2.387: 0.0257% ( 4) 00:12:48.143 2.387 - 2.400: 1.0341% ( 196) 00:12:48.143 2.400 - 2.413: 1.1216% ( 17) 00:12:48.143 2.413 - 2.427: 1.3428% ( 43) 00:12:48.143 2.427 - 2.440: 1.4097% ( 13) 00:12:48.143 2.440 - 2.453: 11.3906% ( 1940) 00:12:48.143 2.453 - 2.467: 51.3145% ( 7760) 00:12:48.143 2.467 - 2.480: 61.1720% ( 1916) 00:12:48.143 2.480 - 2.493: 74.1884% ( 2530) 00:12:48.143 2.493 - 2.507: 80.2593% ( 1180) 00:12:48.143 2.507 - 2.520: 82.3738% ( 411) 00:12:48.143 2.520 - 2.533: 87.7759% ( 1050) 00:12:48.143 2.533 - 2.547: 93.0185% ( 1019) 00:12:48.143 2.547 - 2.560: 95.9407% ( 568) 00:12:48.143 2.560 - 2.573: 98.1839% ( 436) 00:12:48.143 2.573 - 2.587: 99.1511% ( 188) 00:12:48.143 2.587 - 2.600: 99.4083% ( 50) 00:12:48.143 2.600 - 2.613: 99.4341% ( 5) 00:12:48.143 2.613 - 2.627: 99.4392% ( 1) 00:12:48.143 4.160 - 4.187: 99.4444% ( 1) 00:12:48.143 4.293 - 4.320: 99.4495% ( 1) 00:12:48.143 4.373 - 4.400: 99.4546% ( 1) 00:12:48.143 4.560 - 4.587: 99.4598% ( 1) 00:12:48.143 4.587 - 4.613: 99.4649% ( 1) 00:12:48.143 4.613 - 4.640: 99.4701% ( 1) 00:12:48.143 4.667 - 4.693: 99.4752% ( 1) 00:12:48.143 4.693 - 4.720: 99.4804% ( 1) 00:12:48.143 4.720 - 4.747: 99.4855% ( 1) 00:12:48.143 4.773 - 4.800: 99.4958% ( 2) 00:12:48.143 4.853 - 4.880: 99.5061% ( 2) 00:12:48.143 4.880 - 4.907: 99.5112% ( 1) 00:12:48.143 5.093 - 5.120: 99.5164% ( 1) 00:12:48.143 5.120 - 5.147: 99.5215% ( 1) 00:12:48.143 5.307 - 5.333: 99.5267% ( 1) 00:12:48.143 5.360 - 5.387: 99.5421% ( 3) 00:12:48.143 5.413 - 5.440: 99.5524% ( 2) 00:12:48.143 5.547 - 5.573: 99.5575% ( 1) 00:12:48.143 5.680 - 5.707: 99.5627% ( 1) 00:12:48.143 5.733 - 5.760: 99.5678% ( 1) 00:12:48.143 5.867 - 5.893: 99.5730% ( 1) 00:12:48.143 6.240 - 6.267: 99.5781% ( 1) 00:12:48.143 8.053 - 8.107: 99.5833% ( 1) 00:12:48.143 15.040 - 15.147: 99.5884% ( 1) 00:12:48.143 44.587 - 44.800: 99.5936% ( 1) 00:12:48.143 171.520 - 172.373: 99.5987% ( 1) 00:12:48.143 2020.693 - 2034.347: 99.6038% ( 1) 00:12:48.143 2034.347 - 2048.000: 99.6141% ( 2) 00:12:48.143 2061.653 - 2075.307: 99.6193% ( 1) 00:12:48.143 3986.773 - 4014.080: 99.9897% ( 72) 00:12:48.143 5980.160 - 6007.467: 100.0000% ( 2) 00:12:48.143 00:12:48.143 16:04:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:48.143 16:04:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:48.143 16:04:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:48.143 16:04:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:48.143 16:04:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:48.143 [ 00:12:48.143 { 00:12:48.143 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:48.143 "subtype": "Discovery", 
00:12:48.143 "listen_addresses": [], 00:12:48.143 "allow_any_host": true, 00:12:48.143 "hosts": [] 00:12:48.143 }, 00:12:48.143 { 00:12:48.143 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:48.143 "subtype": "NVMe", 00:12:48.143 "listen_addresses": [ 00:12:48.143 { 00:12:48.143 "trtype": "VFIOUSER", 00:12:48.143 "adrfam": "IPv4", 00:12:48.143 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:48.143 "trsvcid": "0" 00:12:48.143 } 00:12:48.143 ], 00:12:48.143 "allow_any_host": true, 00:12:48.143 "hosts": [], 00:12:48.143 "serial_number": "SPDK1", 00:12:48.143 "model_number": "SPDK bdev Controller", 00:12:48.143 "max_namespaces": 32, 00:12:48.143 "min_cntlid": 1, 00:12:48.143 "max_cntlid": 65519, 00:12:48.143 "namespaces": [ 00:12:48.143 { 00:12:48.143 "nsid": 1, 00:12:48.143 "bdev_name": "Malloc1", 00:12:48.143 "name": "Malloc1", 00:12:48.143 "nguid": "4AB68AA5A89341268E02EC0F634D1D1E", 00:12:48.143 "uuid": "4ab68aa5-a893-4126-8e02-ec0f634d1d1e" 00:12:48.143 } 00:12:48.143 ] 00:12:48.143 }, 00:12:48.143 { 00:12:48.143 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:48.143 "subtype": "NVMe", 00:12:48.143 "listen_addresses": [ 00:12:48.143 { 00:12:48.143 "trtype": "VFIOUSER", 00:12:48.143 "adrfam": "IPv4", 00:12:48.143 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:48.143 "trsvcid": "0" 00:12:48.143 } 00:12:48.143 ], 00:12:48.143 "allow_any_host": true, 00:12:48.143 "hosts": [], 00:12:48.143 "serial_number": "SPDK2", 00:12:48.143 "model_number": "SPDK bdev Controller", 00:12:48.143 "max_namespaces": 32, 00:12:48.143 "min_cntlid": 1, 00:12:48.143 "max_cntlid": 65519, 00:12:48.143 "namespaces": [ 00:12:48.143 { 00:12:48.143 "nsid": 1, 00:12:48.143 "bdev_name": "Malloc2", 00:12:48.143 "name": "Malloc2", 00:12:48.143 "nguid": "ED12C6415E074027BB9BEC7DE693214D", 00:12:48.143 "uuid": "ed12c641-5e07-4027-bb9b-ec7de693214d" 00:12:48.143 } 00:12:48.143 ] 00:12:48.143 } 00:12:48.143 ] 00:12:48.143 16:04:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:48.143 16:04:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2197129 00:12:48.143 16:04:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:48.143 16:04:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:12:48.143 16:04:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:48.143 16:04:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:48.143 16:04:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:48.143 16:04:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:12:48.143 16:04:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:48.143 16:04:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:48.404 EAL: No free 2048 kB hugepages reported on node 1 00:12:48.404 Malloc3 00:12:48.404 [2024-07-15 16:04:24.106519] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:48.404 16:04:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:48.665 [2024-07-15 16:04:24.276611] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:48.665 16:04:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:48.665 Asynchronous Event Request test 00:12:48.665 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:48.665 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:48.665 Registering asynchronous event callbacks... 00:12:48.665 Starting namespace attribute notice tests for all controllers... 00:12:48.665 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:48.665 aer_cb - Changed Namespace 00:12:48.665 Cleaning up... 00:12:48.665 [ 00:12:48.665 { 00:12:48.665 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:48.665 "subtype": "Discovery", 00:12:48.665 "listen_addresses": [], 00:12:48.665 "allow_any_host": true, 00:12:48.665 "hosts": [] 00:12:48.665 }, 00:12:48.665 { 00:12:48.665 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:48.665 "subtype": "NVMe", 00:12:48.665 "listen_addresses": [ 00:12:48.665 { 00:12:48.665 "trtype": "VFIOUSER", 00:12:48.665 "adrfam": "IPv4", 00:12:48.665 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:48.665 "trsvcid": "0" 00:12:48.665 } 00:12:48.665 ], 00:12:48.665 "allow_any_host": true, 00:12:48.665 "hosts": [], 00:12:48.665 "serial_number": "SPDK1", 00:12:48.665 "model_number": "SPDK bdev Controller", 00:12:48.665 "max_namespaces": 32, 00:12:48.665 "min_cntlid": 1, 00:12:48.665 "max_cntlid": 65519, 00:12:48.665 "namespaces": [ 00:12:48.665 { 00:12:48.665 "nsid": 1, 00:12:48.665 "bdev_name": "Malloc1", 00:12:48.665 "name": "Malloc1", 00:12:48.665 "nguid": "4AB68AA5A89341268E02EC0F634D1D1E", 00:12:48.665 "uuid": "4ab68aa5-a893-4126-8e02-ec0f634d1d1e" 00:12:48.665 }, 00:12:48.665 { 00:12:48.665 "nsid": 2, 00:12:48.665 "bdev_name": "Malloc3", 00:12:48.665 "name": "Malloc3", 00:12:48.665 "nguid": "61F832A627084B0391D3293CDF544977", 00:12:48.665 "uuid": "61f832a6-2708-4b03-91d3-293cdf544977" 00:12:48.665 } 00:12:48.665 ] 00:12:48.665 }, 00:12:48.665 { 00:12:48.665 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:48.665 "subtype": "NVMe", 00:12:48.665 "listen_addresses": [ 00:12:48.665 { 00:12:48.665 "trtype": "VFIOUSER", 00:12:48.665 "adrfam": "IPv4", 00:12:48.665 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:48.665 "trsvcid": "0" 00:12:48.665 } 00:12:48.665 ], 00:12:48.665 "allow_any_host": true, 00:12:48.665 "hosts": [], 00:12:48.665 "serial_number": "SPDK2", 00:12:48.665 "model_number": "SPDK bdev Controller", 00:12:48.665 
"max_namespaces": 32, 00:12:48.665 "min_cntlid": 1, 00:12:48.665 "max_cntlid": 65519, 00:12:48.665 "namespaces": [ 00:12:48.665 { 00:12:48.665 "nsid": 1, 00:12:48.665 "bdev_name": "Malloc2", 00:12:48.665 "name": "Malloc2", 00:12:48.665 "nguid": "ED12C6415E074027BB9BEC7DE693214D", 00:12:48.665 "uuid": "ed12c641-5e07-4027-bb9b-ec7de693214d" 00:12:48.665 } 00:12:48.665 ] 00:12:48.665 } 00:12:48.665 ] 00:12:48.665 16:04:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2197129 00:12:48.665 16:04:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:48.665 16:04:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:48.665 16:04:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:48.665 16:04:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:48.665 [2024-07-15 16:04:24.506344] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:12:48.665 [2024-07-15 16:04:24.506387] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2197393 ] 00:12:48.927 EAL: No free 2048 kB hugepages reported on node 1 00:12:48.927 [2024-07-15 16:04:24.538652] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:48.927 [2024-07-15 16:04:24.547336] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:48.927 [2024-07-15 16:04:24.547357] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fc41bc30000 00:12:48.927 [2024-07-15 16:04:24.548333] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:48.927 [2024-07-15 16:04:24.549343] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:48.927 [2024-07-15 16:04:24.550346] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:48.927 [2024-07-15 16:04:24.551350] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:48.928 [2024-07-15 16:04:24.552359] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:48.928 [2024-07-15 16:04:24.553367] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:48.928 [2024-07-15 16:04:24.554373] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:48.928 [2024-07-15 16:04:24.555377] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:48.928 [2024-07-15 16:04:24.556384] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:48.928 [2024-07-15 16:04:24.556393] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fc41bc25000 00:12:48.928 [2024-07-15 16:04:24.557716] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:48.928 [2024-07-15 16:04:24.573914] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:48.928 [2024-07-15 16:04:24.573938] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:12:48.928 [2024-07-15 16:04:24.576003] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:48.928 [2024-07-15 16:04:24.576045] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:48.928 [2024-07-15 16:04:24.576128] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:12:48.928 [2024-07-15 16:04:24.576146] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:12:48.928 [2024-07-15 16:04:24.576151] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:12:48.928 [2024-07-15 16:04:24.577006] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:48.928 [2024-07-15 16:04:24.577014] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:12:48.928 [2024-07-15 16:04:24.577021] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:12:48.928 [2024-07-15 16:04:24.578008] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:48.928 [2024-07-15 16:04:24.578017] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:12:48.928 [2024-07-15 16:04:24.578025] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:12:48.928 [2024-07-15 16:04:24.581128] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:48.928 [2024-07-15 16:04:24.581136] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:48.928 [2024-07-15 16:04:24.582030] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:48.928 [2024-07-15 16:04:24.582039] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:12:48.928 [2024-07-15 16:04:24.582043] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:12:48.928 [2024-07-15 16:04:24.582050] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:48.928 [2024-07-15 16:04:24.582155] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:12:48.928 [2024-07-15 16:04:24.582160] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:48.928 [2024-07-15 16:04:24.582165] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:48.928 [2024-07-15 16:04:24.583038] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:48.928 [2024-07-15 16:04:24.584048] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:48.928 [2024-07-15 16:04:24.585062] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:48.928 [2024-07-15 16:04:24.586060] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:48.928 [2024-07-15 16:04:24.586098] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:48.928 [2024-07-15 16:04:24.587079] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:48.928 [2024-07-15 16:04:24.587088] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:48.928 [2024-07-15 16:04:24.587095] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:12:48.928 [2024-07-15 16:04:24.587116] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:12:48.928 [2024-07-15 16:04:24.587126] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:12:48.928 [2024-07-15 16:04:24.587138] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:48.928 [2024-07-15 16:04:24.587143] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:48.928 [2024-07-15 16:04:24.587155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:48.928 [2024-07-15 16:04:24.592130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:48.928 [2024-07-15 16:04:24.592141] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:12:48.928 [2024-07-15 16:04:24.592148] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:12:48.928 [2024-07-15 16:04:24.592152] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:12:48.928 [2024-07-15 16:04:24.592157] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:48.928 [2024-07-15 16:04:24.592161] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:12:48.928 [2024-07-15 16:04:24.592166] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:12:48.928 [2024-07-15 16:04:24.592170] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:12:48.928 [2024-07-15 16:04:24.592177] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:12:48.928 [2024-07-15 16:04:24.592187] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:48.928 [2024-07-15 16:04:24.600127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:48.928 [2024-07-15 16:04:24.600141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:48.928 [2024-07-15 16:04:24.600150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:48.928 [2024-07-15 16:04:24.600158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:48.928 [2024-07-15 16:04:24.600167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:48.928 [2024-07-15 16:04:24.600171] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:12:48.928 [2024-07-15 16:04:24.600179] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:48.928 [2024-07-15 16:04:24.600188] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:48.928 [2024-07-15 16:04:24.608129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:48.928 [2024-07-15 16:04:24.608136] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:12:48.928 [2024-07-15 16:04:24.608143] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:48.928 [2024-07-15 16:04:24.608150] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:12:48.928 [2024-07-15 16:04:24.608155] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:12:48.928 [2024-07-15 16:04:24.608164] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:48.928 [2024-07-15 16:04:24.616135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:48.928 [2024-07-15 16:04:24.616199] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:12:48.928 [2024-07-15 16:04:24.616207] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:12:48.928 [2024-07-15 16:04:24.616214] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:48.928 [2024-07-15 16:04:24.616219] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:48.928 [2024-07-15 16:04:24.616225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:48.928 [2024-07-15 16:04:24.624127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:48.928 [2024-07-15 16:04:24.624137] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:12:48.928 [2024-07-15 16:04:24.624149] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:12:48.928 [2024-07-15 16:04:24.624157] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:12:48.928 [2024-07-15 16:04:24.624164] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:48.928 [2024-07-15 16:04:24.624168] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:48.928 [2024-07-15 16:04:24.624174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:48.928 [2024-07-15 16:04:24.632128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:48.928 [2024-07-15 16:04:24.632142] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:48.928 [2024-07-15 16:04:24.632149] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:48.928 [2024-07-15 16:04:24.632156] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:48.928 [2024-07-15 16:04:24.632161] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:48.928 [2024-07-15 16:04:24.632167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:48.929 [2024-07-15 16:04:24.640129] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:48.929 [2024-07-15 16:04:24.640147] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:48.929 [2024-07-15 16:04:24.640156] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:12:48.929 [2024-07-15 16:04:24.640164] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:12:48.929 [2024-07-15 16:04:24.640169] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:12:48.929 [2024-07-15 16:04:24.640174] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:48.929 [2024-07-15 16:04:24.640179] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:12:48.929 [2024-07-15 16:04:24.640184] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:12:48.929 [2024-07-15 16:04:24.640188] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:12:48.929 [2024-07-15 16:04:24.640193] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:12:48.929 [2024-07-15 16:04:24.640209] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:48.929 [2024-07-15 16:04:24.648128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:48.929 [2024-07-15 16:04:24.648142] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:48.929 [2024-07-15 16:04:24.656128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:48.929 [2024-07-15 16:04:24.656141] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:48.929 [2024-07-15 16:04:24.664127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:48.929 [2024-07-15 16:04:24.664140] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:48.929 [2024-07-15 16:04:24.672128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:48.929 [2024-07-15 16:04:24.672144] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:48.929 [2024-07-15 16:04:24.672149] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:48.929 [2024-07-15 16:04:24.672152] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 
00:12:48.929 [2024-07-15 16:04:24.672156] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:48.929 [2024-07-15 16:04:24.672162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:48.929 [2024-07-15 16:04:24.672170] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:48.929 [2024-07-15 16:04:24.672174] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:48.929 [2024-07-15 16:04:24.672180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:48.929 [2024-07-15 16:04:24.672187] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:48.929 [2024-07-15 16:04:24.672192] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:48.929 [2024-07-15 16:04:24.672197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:48.929 [2024-07-15 16:04:24.672207] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:48.929 [2024-07-15 16:04:24.672211] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:48.929 [2024-07-15 16:04:24.672217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:48.929 [2024-07-15 16:04:24.680128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:48.929 [2024-07-15 16:04:24.680142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:48.929 [2024-07-15 16:04:24.680153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:48.929 [2024-07-15 16:04:24.680160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:48.929 ===================================================== 00:12:48.929 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:48.929 ===================================================== 00:12:48.929 Controller Capabilities/Features 00:12:48.929 ================================ 00:12:48.929 Vendor ID: 4e58 00:12:48.929 Subsystem Vendor ID: 4e58 00:12:48.929 Serial Number: SPDK2 00:12:48.929 Model Number: SPDK bdev Controller 00:12:48.929 Firmware Version: 24.09 00:12:48.929 Recommended Arb Burst: 6 00:12:48.929 IEEE OUI Identifier: 8d 6b 50 00:12:48.929 Multi-path I/O 00:12:48.929 May have multiple subsystem ports: Yes 00:12:48.929 May have multiple controllers: Yes 00:12:48.929 Associated with SR-IOV VF: No 00:12:48.929 Max Data Transfer Size: 131072 00:12:48.929 Max Number of Namespaces: 32 00:12:48.929 Max Number of I/O Queues: 127 00:12:48.929 NVMe Specification Version (VS): 1.3 00:12:48.929 NVMe Specification Version (Identify): 1.3 00:12:48.929 Maximum Queue Entries: 256 00:12:48.929 Contiguous Queues Required: Yes 00:12:48.929 Arbitration Mechanisms 
Supported 00:12:48.929 Weighted Round Robin: Not Supported 00:12:48.929 Vendor Specific: Not Supported 00:12:48.929 Reset Timeout: 15000 ms 00:12:48.929 Doorbell Stride: 4 bytes 00:12:48.929 NVM Subsystem Reset: Not Supported 00:12:48.929 Command Sets Supported 00:12:48.929 NVM Command Set: Supported 00:12:48.929 Boot Partition: Not Supported 00:12:48.929 Memory Page Size Minimum: 4096 bytes 00:12:48.929 Memory Page Size Maximum: 4096 bytes 00:12:48.929 Persistent Memory Region: Not Supported 00:12:48.929 Optional Asynchronous Events Supported 00:12:48.929 Namespace Attribute Notices: Supported 00:12:48.929 Firmware Activation Notices: Not Supported 00:12:48.929 ANA Change Notices: Not Supported 00:12:48.929 PLE Aggregate Log Change Notices: Not Supported 00:12:48.929 LBA Status Info Alert Notices: Not Supported 00:12:48.929 EGE Aggregate Log Change Notices: Not Supported 00:12:48.929 Normal NVM Subsystem Shutdown event: Not Supported 00:12:48.929 Zone Descriptor Change Notices: Not Supported 00:12:48.929 Discovery Log Change Notices: Not Supported 00:12:48.929 Controller Attributes 00:12:48.929 128-bit Host Identifier: Supported 00:12:48.929 Non-Operational Permissive Mode: Not Supported 00:12:48.929 NVM Sets: Not Supported 00:12:48.929 Read Recovery Levels: Not Supported 00:12:48.929 Endurance Groups: Not Supported 00:12:48.929 Predictable Latency Mode: Not Supported 00:12:48.929 Traffic Based Keep ALive: Not Supported 00:12:48.929 Namespace Granularity: Not Supported 00:12:48.929 SQ Associations: Not Supported 00:12:48.929 UUID List: Not Supported 00:12:48.929 Multi-Domain Subsystem: Not Supported 00:12:48.929 Fixed Capacity Management: Not Supported 00:12:48.929 Variable Capacity Management: Not Supported 00:12:48.929 Delete Endurance Group: Not Supported 00:12:48.929 Delete NVM Set: Not Supported 00:12:48.929 Extended LBA Formats Supported: Not Supported 00:12:48.929 Flexible Data Placement Supported: Not Supported 00:12:48.929 00:12:48.929 Controller Memory Buffer Support 00:12:48.929 ================================ 00:12:48.929 Supported: No 00:12:48.929 00:12:48.929 Persistent Memory Region Support 00:12:48.929 ================================ 00:12:48.929 Supported: No 00:12:48.929 00:12:48.929 Admin Command Set Attributes 00:12:48.929 ============================ 00:12:48.929 Security Send/Receive: Not Supported 00:12:48.929 Format NVM: Not Supported 00:12:48.929 Firmware Activate/Download: Not Supported 00:12:48.929 Namespace Management: Not Supported 00:12:48.929 Device Self-Test: Not Supported 00:12:48.929 Directives: Not Supported 00:12:48.929 NVMe-MI: Not Supported 00:12:48.929 Virtualization Management: Not Supported 00:12:48.929 Doorbell Buffer Config: Not Supported 00:12:48.929 Get LBA Status Capability: Not Supported 00:12:48.929 Command & Feature Lockdown Capability: Not Supported 00:12:48.929 Abort Command Limit: 4 00:12:48.929 Async Event Request Limit: 4 00:12:48.929 Number of Firmware Slots: N/A 00:12:48.929 Firmware Slot 1 Read-Only: N/A 00:12:48.929 Firmware Activation Without Reset: N/A 00:12:48.929 Multiple Update Detection Support: N/A 00:12:48.929 Firmware Update Granularity: No Information Provided 00:12:48.929 Per-Namespace SMART Log: No 00:12:48.929 Asymmetric Namespace Access Log Page: Not Supported 00:12:48.929 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:48.929 Command Effects Log Page: Supported 00:12:48.929 Get Log Page Extended Data: Supported 00:12:48.929 Telemetry Log Pages: Not Supported 00:12:48.929 Persistent Event Log Pages: Not Supported 
00:12:48.929 Supported Log Pages Log Page: May Support 00:12:48.929 Commands Supported & Effects Log Page: Not Supported 00:12:48.929 Feature Identifiers & Effects Log Page:May Support 00:12:48.929 NVMe-MI Commands & Effects Log Page: May Support 00:12:48.929 Data Area 4 for Telemetry Log: Not Supported 00:12:48.929 Error Log Page Entries Supported: 128 00:12:48.929 Keep Alive: Supported 00:12:48.929 Keep Alive Granularity: 10000 ms 00:12:48.929 00:12:48.929 NVM Command Set Attributes 00:12:48.929 ========================== 00:12:48.929 Submission Queue Entry Size 00:12:48.929 Max: 64 00:12:48.929 Min: 64 00:12:48.929 Completion Queue Entry Size 00:12:48.929 Max: 16 00:12:48.929 Min: 16 00:12:48.929 Number of Namespaces: 32 00:12:48.929 Compare Command: Supported 00:12:48.929 Write Uncorrectable Command: Not Supported 00:12:48.929 Dataset Management Command: Supported 00:12:48.930 Write Zeroes Command: Supported 00:12:48.930 Set Features Save Field: Not Supported 00:12:48.930 Reservations: Not Supported 00:12:48.930 Timestamp: Not Supported 00:12:48.930 Copy: Supported 00:12:48.930 Volatile Write Cache: Present 00:12:48.930 Atomic Write Unit (Normal): 1 00:12:48.930 Atomic Write Unit (PFail): 1 00:12:48.930 Atomic Compare & Write Unit: 1 00:12:48.930 Fused Compare & Write: Supported 00:12:48.930 Scatter-Gather List 00:12:48.930 SGL Command Set: Supported (Dword aligned) 00:12:48.930 SGL Keyed: Not Supported 00:12:48.930 SGL Bit Bucket Descriptor: Not Supported 00:12:48.930 SGL Metadata Pointer: Not Supported 00:12:48.930 Oversized SGL: Not Supported 00:12:48.930 SGL Metadata Address: Not Supported 00:12:48.930 SGL Offset: Not Supported 00:12:48.930 Transport SGL Data Block: Not Supported 00:12:48.930 Replay Protected Memory Block: Not Supported 00:12:48.930 00:12:48.930 Firmware Slot Information 00:12:48.930 ========================= 00:12:48.930 Active slot: 1 00:12:48.930 Slot 1 Firmware Revision: 24.09 00:12:48.930 00:12:48.930 00:12:48.930 Commands Supported and Effects 00:12:48.930 ============================== 00:12:48.930 Admin Commands 00:12:48.930 -------------- 00:12:48.930 Get Log Page (02h): Supported 00:12:48.930 Identify (06h): Supported 00:12:48.930 Abort (08h): Supported 00:12:48.930 Set Features (09h): Supported 00:12:48.930 Get Features (0Ah): Supported 00:12:48.930 Asynchronous Event Request (0Ch): Supported 00:12:48.930 Keep Alive (18h): Supported 00:12:48.930 I/O Commands 00:12:48.930 ------------ 00:12:48.930 Flush (00h): Supported LBA-Change 00:12:48.930 Write (01h): Supported LBA-Change 00:12:48.930 Read (02h): Supported 00:12:48.930 Compare (05h): Supported 00:12:48.930 Write Zeroes (08h): Supported LBA-Change 00:12:48.930 Dataset Management (09h): Supported LBA-Change 00:12:48.930 Copy (19h): Supported LBA-Change 00:12:48.930 00:12:48.930 Error Log 00:12:48.930 ========= 00:12:48.930 00:12:48.930 Arbitration 00:12:48.930 =========== 00:12:48.930 Arbitration Burst: 1 00:12:48.930 00:12:48.930 Power Management 00:12:48.930 ================ 00:12:48.930 Number of Power States: 1 00:12:48.930 Current Power State: Power State #0 00:12:48.930 Power State #0: 00:12:48.930 Max Power: 0.00 W 00:12:48.930 Non-Operational State: Operational 00:12:48.930 Entry Latency: Not Reported 00:12:48.930 Exit Latency: Not Reported 00:12:48.930 Relative Read Throughput: 0 00:12:48.930 Relative Read Latency: 0 00:12:48.930 Relative Write Throughput: 0 00:12:48.930 Relative Write Latency: 0 00:12:48.930 Idle Power: Not Reported 00:12:48.930 Active Power: Not Reported 00:12:48.930 
Non-Operational Permissive Mode: Not Supported 00:12:48.930 00:12:48.930 Health Information 00:12:48.930 ================== 00:12:48.930 Critical Warnings: 00:12:48.930 Available Spare Space: OK 00:12:48.930 Temperature: OK 00:12:48.930 Device Reliability: OK 00:12:48.930 Read Only: No 00:12:48.930 Volatile Memory Backup: OK 00:12:48.930 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:48.930 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:48.930 Available Spare: 0% 00:12:48.930 Available Sp[2024-07-15 16:04:24.680258] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:48.930 [2024-07-15 16:04:24.688129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:48.930 [2024-07-15 16:04:24.688161] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:12:48.930 [2024-07-15 16:04:24.688170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:48.930 [2024-07-15 16:04:24.688177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:48.930 [2024-07-15 16:04:24.688183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:48.930 [2024-07-15 16:04:24.688189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:48.930 [2024-07-15 16:04:24.688240] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:48.930 [2024-07-15 16:04:24.688251] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:48.930 [2024-07-15 16:04:24.689240] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:48.930 [2024-07-15 16:04:24.689288] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:12:48.930 [2024-07-15 16:04:24.689295] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:12:48.930 [2024-07-15 16:04:24.690244] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:48.930 [2024-07-15 16:04:24.690256] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:12:48.930 [2024-07-15 16:04:24.690302] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:48.930 [2024-07-15 16:04:24.693128] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:48.930 are Threshold: 0% 00:12:48.930 Life Percentage Used: 0% 00:12:48.930 Data Units Read: 0 00:12:48.930 Data Units Written: 0 00:12:48.930 Host Read Commands: 0 00:12:48.930 Host Write Commands: 0 00:12:48.930 Controller Busy Time: 0 minutes 00:12:48.930 Power Cycles: 0 00:12:48.930 Power On Hours: 0 hours 00:12:48.930 Unsafe Shutdowns: 0 00:12:48.930 Unrecoverable Media 
Errors: 0 00:12:48.930 Lifetime Error Log Entries: 0 00:12:48.930 Warning Temperature Time: 0 minutes 00:12:48.930 Critical Temperature Time: 0 minutes 00:12:48.930 00:12:48.930 Number of Queues 00:12:48.930 ================ 00:12:48.930 Number of I/O Submission Queues: 127 00:12:48.930 Number of I/O Completion Queues: 127 00:12:48.930 00:12:48.930 Active Namespaces 00:12:48.930 ================= 00:12:48.930 Namespace ID:1 00:12:48.930 Error Recovery Timeout: Unlimited 00:12:48.930 Command Set Identifier: NVM (00h) 00:12:48.930 Deallocate: Supported 00:12:48.930 Deallocated/Unwritten Error: Not Supported 00:12:48.930 Deallocated Read Value: Unknown 00:12:48.930 Deallocate in Write Zeroes: Not Supported 00:12:48.930 Deallocated Guard Field: 0xFFFF 00:12:48.930 Flush: Supported 00:12:48.930 Reservation: Supported 00:12:48.930 Namespace Sharing Capabilities: Multiple Controllers 00:12:48.930 Size (in LBAs): 131072 (0GiB) 00:12:48.930 Capacity (in LBAs): 131072 (0GiB) 00:12:48.930 Utilization (in LBAs): 131072 (0GiB) 00:12:48.930 NGUID: ED12C6415E074027BB9BEC7DE693214D 00:12:48.930 UUID: ed12c641-5e07-4027-bb9b-ec7de693214d 00:12:48.930 Thin Provisioning: Not Supported 00:12:48.930 Per-NS Atomic Units: Yes 00:12:48.930 Atomic Boundary Size (Normal): 0 00:12:48.930 Atomic Boundary Size (PFail): 0 00:12:48.930 Atomic Boundary Offset: 0 00:12:48.930 Maximum Single Source Range Length: 65535 00:12:48.930 Maximum Copy Length: 65535 00:12:48.930 Maximum Source Range Count: 1 00:12:48.930 NGUID/EUI64 Never Reused: No 00:12:48.930 Namespace Write Protected: No 00:12:48.930 Number of LBA Formats: 1 00:12:48.930 Current LBA Format: LBA Format #00 00:12:48.930 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:48.930 00:12:48.930 16:04:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:49.190 EAL: No free 2048 kB hugepages reported on node 1 00:12:49.190 [2024-07-15 16:04:24.876096] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:54.475 Initializing NVMe Controllers 00:12:54.475 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:54.475 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:54.475 Initialization complete. Launching workers. 
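The latency/IOPS table that follows is the result of the spdk_nvme_perf read pass started above (queue depth 128, 4 KiB IOs, 5 seconds, core mask 0x2); a write pass with the same parameters runs immediately after it. A minimal sketch of that pair, again with $SPDK_DIR as a placeholder for the workspace path:

    # hypothetical shell sketch; flags copied from the perf invocations in this log
    TR='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
    # 5-second read pass, 128 outstanding 4 KiB IOs, pinned to core 1 (-c 0x2)
    $SPDK_DIR/build/bin/spdk_nvme_perf -r "$TR" -s 256 -g -q 128 -o 4096 -w read  -t 5 -c 0x2
    # same parameters, write workload
    $SPDK_DIR/build/bin/spdk_nvme_perf -r "$TR" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2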
00:12:54.475 ========================================================
00:12:54.475 Latency(us)
00:12:54.475 Device Information : IOPS MiB/s Average min max
00:12:54.475 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39997.00 156.24 3202.62 826.75 6826.59
00:12:54.475 ========================================================
00:12:54.475 Total : 39997.00 156.24 3202.62 826.75 6826.59
00:12:54.475 
00:12:54.475 [2024-07-15 16:04:29.981307] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:12:54.475 16:04:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
00:12:54.475 EAL: No free 2048 kB hugepages reported on node 1
00:12:54.475 [2024-07-15 16:04:30.157889] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:12:59.761 Initializing NVMe Controllers
00:12:59.761 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:12:59.761 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1
00:12:59.761 Initialization complete. Launching workers.
00:12:59.761 ========================================================
00:12:59.761 Latency(us)
00:12:59.761 Device Information : IOPS MiB/s Average min max
00:12:59.761 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 36182.04 141.34 3537.44 1090.97 7398.68
00:12:59.761 ========================================================
00:12:59.761 Total : 36182.04 141.34 3537.44 1090.97 7398.68
00:12:59.761 
00:12:59.761 [2024-07-15 16:04:35.178901] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:12:59.761 16:04:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
00:12:59.761 EAL: No free 2048 kB hugepages reported on node 1
00:12:59.761 [2024-07-15 16:04:35.369511] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:13:05.086 [2024-07-15 16:04:40.515198] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:13:05.086 Initializing NVMe Controllers
00:13:05.086 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:13:05.086 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:13:05.086 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1
00:13:05.086 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2
00:13:05.086 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3
00:13:05.086 Initialization complete. Launching workers.
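The reconnect example whose worker threads start below drives a 50/50 random read/write mix at queue depth 32 across cores 1-3, and the arbitration example that follows it exercises the same controller from four cores with an urgent-priority queue. A minimal sketch of the two invocations, with $SPDK_DIR again standing in for the workspace path:

    # hypothetical shell sketch; flags copied from the commands in this log
    TR='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
    # 5 s of 4 KiB random I/O, 50% reads (-M 50), queue depth 32, cores 1-3 (-c 0xE)
    $SPDK_DIR/build/examples/reconnect   -r "$TR" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
    # 3 s arbitration run; the tool expands this to -q 64 -w randrw -M 50 -c 0xf, as echoed below
    $SPDK_DIR/build/examples/arbitration -t 3 -r "$TR" -d 256 -g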
00:13:05.086 Starting thread on core 2
00:13:05.086 Starting thread on core 3
00:13:05.086 Starting thread on core 1
00:13:05.086 16:04:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g
00:13:05.086 EAL: No free 2048 kB hugepages reported on node 1
00:13:05.086 [2024-07-15 16:04:40.777555] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:13:09.290 [2024-07-15 16:04:44.608673] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:13:09.290 Initializing NVMe Controllers
00:13:09.290 Attaching to /var/run/vfio-user/domain/vfio-user2/2
00:13:09.290 Attached to /var/run/vfio-user/domain/vfio-user2/2
00:13:09.290 Associating SPDK bdev Controller (SPDK2 ) with lcore 0
00:13:09.290 Associating SPDK bdev Controller (SPDK2 ) with lcore 1
00:13:09.290 Associating SPDK bdev Controller (SPDK2 ) with lcore 2
00:13:09.290 Associating SPDK bdev Controller (SPDK2 ) with lcore 3
00:13:09.290 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration:
00:13:09.290 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1
00:13:09.290 Initialization complete. Launching workers.
00:13:09.290 Starting thread on core 1 with urgent priority queue
00:13:09.290 Starting thread on core 2 with urgent priority queue
00:13:09.290 Starting thread on core 3 with urgent priority queue
00:13:09.290 Starting thread on core 0 with urgent priority queue
00:13:09.290 SPDK bdev Controller (SPDK2 ) core 0: 6521.33 IO/s 15.33 secs/100000 ios
00:13:09.290 SPDK bdev Controller (SPDK2 ) core 1: 13759.67 IO/s 7.27 secs/100000 ios
00:13:09.290 SPDK bdev Controller (SPDK2 ) core 2: 7452.33 IO/s 13.42 secs/100000 ios
00:13:09.290 SPDK bdev Controller (SPDK2 ) core 3: 9943.00 IO/s 10.06 secs/100000 ios
00:13:09.290 ========================================================
00:13:09.290 
00:13:09.290 16:04:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
00:13:09.290 EAL: No free 2048 kB hugepages reported on node 1
00:13:09.290 [2024-07-15 16:04:44.873508] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:13:09.290 Initializing NVMe Controllers
00:13:09.290 Attaching to /var/run/vfio-user/domain/vfio-user2/2
00:13:09.290 Attached to /var/run/vfio-user/domain/vfio-user2/2
00:13:09.290 Namespace ID: 1 size: 0GB
00:13:09.290 Initialization complete.
00:13:09.290 INFO: using host memory buffer for IO
00:13:09.290 Hello world!
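Further below, the AER stage for cnode2 repeats the pattern used earlier for cnode1: while test/nvme/aer waits on a touch file, a new malloc bdev is created over JSON-RPC and hot-added as a second namespace, which fires the namespace-attribute-changed notice the callback reports. A minimal sketch of that RPC sequence as it was issued for cnode1 earlier in this log (the cnode2 pass presumably substitutes Malloc4, per the malloc_num variable further down); $SPDK_DIR again abbreviates the workspace path:

    # hypothetical shell sketch; commands copied from the cnode1 AER pass earlier in this log
    # create a 64 MB malloc bdev with 512-byte blocks
    $SPDK_DIR/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3
    # hot-add it to the running subsystem as namespace ID 2 (this is what triggers the AER)
    $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
    # confirm the new namespace appears in the subsystem listing
    $SPDK_DIR/scripts/rpc.py nvmf_get_subsystems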
00:13:09.290 [2024-07-15 16:04:44.883574] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:09.290 16:04:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:09.290 EAL: No free 2048 kB hugepages reported on node 1 00:13:09.551 [2024-07-15 16:04:45.142117] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:10.493 Initializing NVMe Controllers 00:13:10.493 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:10.493 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:10.493 Initialization complete. Launching workers. 00:13:10.493 submit (in ns) avg, min, max = 8752.0, 3895.0, 4000818.3 00:13:10.493 complete (in ns) avg, min, max = 17614.8, 2387.5, 4994572.5 00:13:10.493 00:13:10.493 Submit histogram 00:13:10.493 ================ 00:13:10.493 Range in us Cumulative Count 00:13:10.493 3.893 - 3.920: 1.2830% ( 251) 00:13:10.493 3.920 - 3.947: 7.8000% ( 1275) 00:13:10.493 3.947 - 3.973: 16.9955% ( 1799) 00:13:10.493 3.973 - 4.000: 28.3838% ( 2228) 00:13:10.493 4.000 - 4.027: 37.9268% ( 1867) 00:13:10.493 4.027 - 4.053: 48.8550% ( 2138) 00:13:10.493 4.053 - 4.080: 66.4077% ( 3434) 00:13:10.493 4.080 - 4.107: 80.7759% ( 2811) 00:13:10.493 4.107 - 4.133: 91.2543% ( 2050) 00:13:10.493 4.133 - 4.160: 96.7696% ( 1079) 00:13:10.493 4.160 - 4.187: 98.6915% ( 376) 00:13:10.493 4.187 - 4.213: 99.2486% ( 109) 00:13:10.493 4.213 - 4.240: 99.3611% ( 22) 00:13:10.493 4.240 - 4.267: 99.3969% ( 7) 00:13:10.493 4.267 - 4.293: 99.4224% ( 5) 00:13:10.493 4.293 - 4.320: 99.4275% ( 1) 00:13:10.493 4.347 - 4.373: 99.4377% ( 2) 00:13:10.493 4.373 - 4.400: 99.4429% ( 1) 00:13:10.493 4.400 - 4.427: 99.4480% ( 1) 00:13:10.493 4.427 - 4.453: 99.4582% ( 2) 00:13:10.493 4.640 - 4.667: 99.4633% ( 1) 00:13:10.493 4.720 - 4.747: 99.4684% ( 1) 00:13:10.493 4.773 - 4.800: 99.4786% ( 2) 00:13:10.493 4.800 - 4.827: 99.4837% ( 1) 00:13:10.493 4.933 - 4.960: 99.4940% ( 2) 00:13:10.493 5.067 - 5.093: 99.4991% ( 1) 00:13:10.493 5.093 - 5.120: 99.5042% ( 1) 00:13:10.493 5.147 - 5.173: 99.5093% ( 1) 00:13:10.493 5.173 - 5.200: 99.5144% ( 1) 00:13:10.493 5.413 - 5.440: 99.5195% ( 1) 00:13:10.493 5.547 - 5.573: 99.5246% ( 1) 00:13:10.493 5.653 - 5.680: 99.5297% ( 1) 00:13:10.493 5.787 - 5.813: 99.5400% ( 2) 00:13:10.493 5.840 - 5.867: 99.5451% ( 1) 00:13:10.493 5.867 - 5.893: 99.5502% ( 1) 00:13:10.493 5.947 - 5.973: 99.5604% ( 2) 00:13:10.493 5.973 - 6.000: 99.5655% ( 1) 00:13:10.493 6.000 - 6.027: 99.5758% ( 2) 00:13:10.493 6.027 - 6.053: 99.5860% ( 2) 00:13:10.493 6.053 - 6.080: 99.5962% ( 2) 00:13:10.493 6.080 - 6.107: 99.6064% ( 2) 00:13:10.493 6.107 - 6.133: 99.6115% ( 1) 00:13:10.493 6.133 - 6.160: 99.6166% ( 1) 00:13:10.493 6.160 - 6.187: 99.6269% ( 2) 00:13:10.493 6.187 - 6.213: 99.6371% ( 2) 00:13:10.493 6.213 - 6.240: 99.6422% ( 1) 00:13:10.493 6.240 - 6.267: 99.6626% ( 4) 00:13:10.493 6.267 - 6.293: 99.6780% ( 3) 00:13:10.493 6.293 - 6.320: 99.6831% ( 1) 00:13:10.493 6.320 - 6.347: 99.6984% ( 3) 00:13:10.493 6.347 - 6.373: 99.7086% ( 2) 00:13:10.493 6.400 - 6.427: 99.7138% ( 1) 00:13:10.493 6.453 - 6.480: 99.7189% ( 1) 00:13:10.493 6.480 - 6.507: 99.7342% ( 3) 00:13:10.493 6.507 - 6.533: 99.7393% ( 1) 00:13:10.493 6.560 - 6.587: 99.7495% ( 2) 00:13:10.493 6.613 - 6.640: 99.7649% ( 3) 
00:13:10.493 6.640 - 6.667: 99.7700% ( 1) 00:13:10.493 6.667 - 6.693: 99.7751% ( 1) 00:13:10.493 6.773 - 6.800: 99.7853% ( 2) 00:13:10.493 6.827 - 6.880: 99.8007% ( 3) 00:13:10.493 6.880 - 6.933: 99.8160% ( 3) 00:13:10.493 7.040 - 7.093: 99.8211% ( 1) 00:13:10.493 7.093 - 7.147: 99.8262% ( 1) 00:13:10.493 7.200 - 7.253: 99.8313% ( 1) 00:13:10.493 7.413 - 7.467: 99.8415% ( 2) 00:13:10.493 7.627 - 7.680: 99.8467% ( 1) 00:13:10.493 7.787 - 7.840: 99.8518% ( 1) 00:13:10.493 7.893 - 7.947: 99.8569% ( 1) 00:13:10.493 8.053 - 8.107: 99.8620% ( 1) 00:13:10.493 8.267 - 8.320: 99.8722% ( 2) 00:13:10.493 11.947 - 12.000: 99.8773% ( 1) 00:13:10.493 13.493 - 13.547: 99.8824% ( 1) 00:13:10.493 3986.773 - 4014.080: 100.0000% ( 23) 00:13:10.493 00:13:10.493 Complete histogram 00:13:10.493 ================== 00:13:10.493 Range in us Cumulative Count 00:13:10.493 2.387 - 2.400: 0.0716% ( 14) 00:13:10.494 2.400 - 2.413: 1.0734% ( 196) 00:13:10.494 2.413 - 2.427: 1.1347% ( 12) 00:13:10.494 2.427 - [2024-07-15 16:04:46.238797] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:10.494 2.440: 1.2779% ( 28) 00:13:10.494 2.440 - 2.453: 47.9043% ( 9122) 00:13:10.494 2.453 - 2.467: 57.4882% ( 1875) 00:13:10.494 2.467 - 2.480: 71.1869% ( 2680) 00:13:10.494 2.480 - 2.493: 78.8540% ( 1500) 00:13:10.494 2.493 - 2.507: 81.3842% ( 495) 00:13:10.494 2.507 - 2.520: 84.8548% ( 679) 00:13:10.494 2.520 - 2.533: 90.4825% ( 1101) 00:13:10.494 2.533 - 2.547: 95.1748% ( 918) 00:13:10.494 2.547 - 2.560: 97.1376% ( 384) 00:13:10.494 2.560 - 2.573: 98.5535% ( 277) 00:13:10.494 2.573 - 2.587: 99.2793% ( 142) 00:13:10.494 2.587 - 2.600: 99.3764% ( 19) 00:13:10.494 2.600 - 2.613: 99.4122% ( 7) 00:13:10.494 2.613 - 2.627: 99.4173% ( 1) 00:13:10.494 2.667 - 2.680: 99.4224% ( 1) 00:13:10.494 2.707 - 2.720: 99.4275% ( 1) 00:13:10.494 2.747 - 2.760: 99.4326% ( 1) 00:13:10.494 4.053 - 4.080: 99.4377% ( 1) 00:13:10.494 4.187 - 4.213: 99.4429% ( 1) 00:13:10.494 4.267 - 4.293: 99.4531% ( 2) 00:13:10.494 4.293 - 4.320: 99.4582% ( 1) 00:13:10.494 4.427 - 4.453: 99.4684% ( 2) 00:13:10.494 4.507 - 4.533: 99.4786% ( 2) 00:13:10.494 4.667 - 4.693: 99.4940% ( 3) 00:13:10.494 4.720 - 4.747: 99.4991% ( 1) 00:13:10.494 4.773 - 4.800: 99.5042% ( 1) 00:13:10.494 4.800 - 4.827: 99.5093% ( 1) 00:13:10.494 4.827 - 4.853: 99.5195% ( 2) 00:13:10.494 4.880 - 4.907: 99.5246% ( 1) 00:13:10.494 4.907 - 4.933: 99.5349% ( 2) 00:13:10.494 4.960 - 4.987: 99.5400% ( 1) 00:13:10.494 5.013 - 5.040: 99.5451% ( 1) 00:13:10.494 5.147 - 5.173: 99.5502% ( 1) 00:13:10.494 5.227 - 5.253: 99.5553% ( 1) 00:13:10.494 5.280 - 5.307: 99.5604% ( 1) 00:13:10.494 5.307 - 5.333: 99.5655% ( 1) 00:13:10.494 5.360 - 5.387: 99.5706% ( 1) 00:13:10.494 5.467 - 5.493: 99.5809% ( 2) 00:13:10.494 5.493 - 5.520: 99.5860% ( 1) 00:13:10.494 5.547 - 5.573: 99.5911% ( 1) 00:13:10.494 5.707 - 5.733: 99.5962% ( 1) 00:13:10.494 6.587 - 6.613: 99.6013% ( 1) 00:13:10.494 10.240 - 10.293: 99.6064% ( 1) 00:13:10.494 11.413 - 11.467: 99.6115% ( 1) 00:13:10.494 13.653 - 13.760: 99.6166% ( 1) 00:13:10.494 53.547 - 53.760: 99.6218% ( 1) 00:13:10.494 3017.387 - 3031.040: 99.6269% ( 1) 00:13:10.494 3031.040 - 3044.693: 99.6320% ( 1) 00:13:10.494 3072.000 - 3085.653: 99.6371% ( 1) 00:13:10.494 3358.720 - 3372.373: 99.6422% ( 1) 00:13:10.494 3986.773 - 4014.080: 99.9796% ( 66) 00:13:10.494 4969.813 - 4997.120: 100.0000% ( 4) 00:13:10.494 00:13:10.494 16:04:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user 
/var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:10.494 16:04:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:10.494 16:04:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:10.494 16:04:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:10.494 16:04:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:10.755 [ 00:13:10.755 { 00:13:10.755 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:10.755 "subtype": "Discovery", 00:13:10.755 "listen_addresses": [], 00:13:10.755 "allow_any_host": true, 00:13:10.755 "hosts": [] 00:13:10.755 }, 00:13:10.755 { 00:13:10.755 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:10.755 "subtype": "NVMe", 00:13:10.755 "listen_addresses": [ 00:13:10.755 { 00:13:10.755 "trtype": "VFIOUSER", 00:13:10.755 "adrfam": "IPv4", 00:13:10.755 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:10.755 "trsvcid": "0" 00:13:10.755 } 00:13:10.755 ], 00:13:10.755 "allow_any_host": true, 00:13:10.755 "hosts": [], 00:13:10.755 "serial_number": "SPDK1", 00:13:10.755 "model_number": "SPDK bdev Controller", 00:13:10.755 "max_namespaces": 32, 00:13:10.755 "min_cntlid": 1, 00:13:10.755 "max_cntlid": 65519, 00:13:10.755 "namespaces": [ 00:13:10.755 { 00:13:10.755 "nsid": 1, 00:13:10.755 "bdev_name": "Malloc1", 00:13:10.755 "name": "Malloc1", 00:13:10.755 "nguid": "4AB68AA5A89341268E02EC0F634D1D1E", 00:13:10.755 "uuid": "4ab68aa5-a893-4126-8e02-ec0f634d1d1e" 00:13:10.755 }, 00:13:10.755 { 00:13:10.755 "nsid": 2, 00:13:10.755 "bdev_name": "Malloc3", 00:13:10.755 "name": "Malloc3", 00:13:10.755 "nguid": "61F832A627084B0391D3293CDF544977", 00:13:10.755 "uuid": "61f832a6-2708-4b03-91d3-293cdf544977" 00:13:10.755 } 00:13:10.755 ] 00:13:10.755 }, 00:13:10.755 { 00:13:10.755 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:10.755 "subtype": "NVMe", 00:13:10.755 "listen_addresses": [ 00:13:10.755 { 00:13:10.755 "trtype": "VFIOUSER", 00:13:10.755 "adrfam": "IPv4", 00:13:10.755 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:10.755 "trsvcid": "0" 00:13:10.755 } 00:13:10.755 ], 00:13:10.755 "allow_any_host": true, 00:13:10.755 "hosts": [], 00:13:10.755 "serial_number": "SPDK2", 00:13:10.755 "model_number": "SPDK bdev Controller", 00:13:10.755 "max_namespaces": 32, 00:13:10.755 "min_cntlid": 1, 00:13:10.755 "max_cntlid": 65519, 00:13:10.755 "namespaces": [ 00:13:10.755 { 00:13:10.755 "nsid": 1, 00:13:10.755 "bdev_name": "Malloc2", 00:13:10.755 "name": "Malloc2", 00:13:10.755 "nguid": "ED12C6415E074027BB9BEC7DE693214D", 00:13:10.755 "uuid": "ed12c641-5e07-4027-bb9b-ec7de693214d" 00:13:10.755 } 00:13:10.755 ] 00:13:10.755 } 00:13:10.755 ] 00:13:10.755 16:04:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:10.755 16:04:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2201685 00:13:10.755 16:04:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:10.755 16:04:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:13:10.755 16:04:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 
-n 2 -g -t /tmp/aer_touch_file 00:13:10.755 16:04:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:10.755 16:04:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:10.755 16:04:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:13:10.755 16:04:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:10.755 16:04:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:10.755 EAL: No free 2048 kB hugepages reported on node 1 00:13:11.016 Malloc4 00:13:11.016 [2024-07-15 16:04:46.624401] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:11.016 16:04:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:11.016 [2024-07-15 16:04:46.794482] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:11.016 16:04:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:11.016 Asynchronous Event Request test 00:13:11.016 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:11.016 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:11.016 Registering asynchronous event callbacks... 00:13:11.016 Starting namespace attribute notice tests for all controllers... 00:13:11.016 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:11.016 aer_cb - Changed Namespace 00:13:11.016 Cleaning up... 
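The subsystem listing that follows reflects the namespace hot-add performed just above while the aer tool was waiting for the notice. A minimal sketch of that sequence, with the rpc.py path, bdev name and NQN taken from this log:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # create a 64 MiB malloc bdev (512-byte blocks) to back the new namespace
  "$RPC" bdev_malloc_create 64 512 --name Malloc4
  # attach it to the live subsystem as NSID 2; attached hosts receive a
  # namespace-attribute-changed AER, which is what the aer tool is checking
  "$RPC" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
  # confirm the new namespace shows up in the subsystem listing
  "$RPC" nvmf_get_subsystems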
00:13:11.277 [ 00:13:11.277 { 00:13:11.277 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:11.277 "subtype": "Discovery", 00:13:11.277 "listen_addresses": [], 00:13:11.277 "allow_any_host": true, 00:13:11.277 "hosts": [] 00:13:11.277 }, 00:13:11.277 { 00:13:11.277 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:11.277 "subtype": "NVMe", 00:13:11.277 "listen_addresses": [ 00:13:11.277 { 00:13:11.277 "trtype": "VFIOUSER", 00:13:11.277 "adrfam": "IPv4", 00:13:11.277 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:11.277 "trsvcid": "0" 00:13:11.277 } 00:13:11.277 ], 00:13:11.277 "allow_any_host": true, 00:13:11.277 "hosts": [], 00:13:11.277 "serial_number": "SPDK1", 00:13:11.277 "model_number": "SPDK bdev Controller", 00:13:11.277 "max_namespaces": 32, 00:13:11.277 "min_cntlid": 1, 00:13:11.277 "max_cntlid": 65519, 00:13:11.277 "namespaces": [ 00:13:11.277 { 00:13:11.277 "nsid": 1, 00:13:11.277 "bdev_name": "Malloc1", 00:13:11.277 "name": "Malloc1", 00:13:11.277 "nguid": "4AB68AA5A89341268E02EC0F634D1D1E", 00:13:11.277 "uuid": "4ab68aa5-a893-4126-8e02-ec0f634d1d1e" 00:13:11.277 }, 00:13:11.277 { 00:13:11.277 "nsid": 2, 00:13:11.277 "bdev_name": "Malloc3", 00:13:11.277 "name": "Malloc3", 00:13:11.277 "nguid": "61F832A627084B0391D3293CDF544977", 00:13:11.277 "uuid": "61f832a6-2708-4b03-91d3-293cdf544977" 00:13:11.277 } 00:13:11.277 ] 00:13:11.277 }, 00:13:11.277 { 00:13:11.277 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:11.277 "subtype": "NVMe", 00:13:11.277 "listen_addresses": [ 00:13:11.277 { 00:13:11.277 "trtype": "VFIOUSER", 00:13:11.277 "adrfam": "IPv4", 00:13:11.277 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:11.277 "trsvcid": "0" 00:13:11.277 } 00:13:11.277 ], 00:13:11.277 "allow_any_host": true, 00:13:11.277 "hosts": [], 00:13:11.277 "serial_number": "SPDK2", 00:13:11.277 "model_number": "SPDK bdev Controller", 00:13:11.277 "max_namespaces": 32, 00:13:11.277 "min_cntlid": 1, 00:13:11.277 "max_cntlid": 65519, 00:13:11.277 "namespaces": [ 00:13:11.277 { 00:13:11.277 "nsid": 1, 00:13:11.277 "bdev_name": "Malloc2", 00:13:11.277 "name": "Malloc2", 00:13:11.277 "nguid": "ED12C6415E074027BB9BEC7DE693214D", 00:13:11.277 "uuid": "ed12c641-5e07-4027-bb9b-ec7de693214d" 00:13:11.277 }, 00:13:11.277 { 00:13:11.277 "nsid": 2, 00:13:11.277 "bdev_name": "Malloc4", 00:13:11.277 "name": "Malloc4", 00:13:11.277 "nguid": "8DD4544874A14C02BDF461E47EFEE64F", 00:13:11.277 "uuid": "8dd45448-74a1-4c02-bdf4-61e47efee64f" 00:13:11.277 } 00:13:11.277 ] 00:13:11.277 } 00:13:11.277 ] 00:13:11.277 16:04:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2201685 00:13:11.277 16:04:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:11.277 16:04:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2192323 00:13:11.277 16:04:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 2192323 ']' 00:13:11.277 16:04:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 2192323 00:13:11.277 16:04:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:13:11.277 16:04:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:11.277 16:04:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2192323 00:13:11.277 16:04:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:11.277 16:04:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo 
']' 00:13:11.277 16:04:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2192323' 00:13:11.277 killing process with pid 2192323 00:13:11.277 16:04:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 2192323 00:13:11.277 16:04:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 2192323 00:13:11.539 16:04:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:11.539 16:04:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:11.539 16:04:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:11.539 16:04:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:11.539 16:04:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:11.539 16:04:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2201768 00:13:11.539 16:04:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2201768' 00:13:11.539 Process pid: 2201768 00:13:11.539 16:04:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:11.539 16:04:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:11.539 16:04:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2201768 00:13:11.539 16:04:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 2201768 ']' 00:13:11.539 16:04:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.539 16:04:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:11.539 16:04:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.539 16:04:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:11.539 16:04:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:11.539 [2024-07-15 16:04:47.277951] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:11.539 [2024-07-15 16:04:47.278878] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:13:11.539 [2024-07-15 16:04:47.278919] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:11.539 EAL: No free 2048 kB hugepages reported on node 1 00:13:11.539 [2024-07-15 16:04:47.338467] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:11.799 [2024-07-15 16:04:47.403178] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:11.799 [2024-07-15 16:04:47.403218] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:11.799 [2024-07-15 16:04:47.403226] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:11.799 [2024-07-15 16:04:47.403232] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:11.799 [2024-07-15 16:04:47.403238] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:11.799 [2024-07-15 16:04:47.403308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:11.799 [2024-07-15 16:04:47.403445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:11.799 [2024-07-15 16:04:47.403599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.799 [2024-07-15 16:04:47.403601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:11.799 [2024-07-15 16:04:47.469359] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:11.799 [2024-07-15 16:04:47.469438] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:13:11.800 [2024-07-15 16:04:47.470362] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:13:11.800 [2024-07-15 16:04:47.470763] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:13:11.800 [2024-07-15 16:04:47.470860] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:13:12.372 16:04:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:12.372 16:04:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:13:12.372 16:04:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:13.314 16:04:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:13.575 16:04:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:13.575 16:04:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:13.575 16:04:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:13.575 16:04:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:13.575 16:04:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:13.575 Malloc1 00:13:13.575 16:04:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:13.836 16:04:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:14.097 16:04:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:14.097 16:04:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:13:14.097 16:04:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:14.097 16:04:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:14.358 Malloc2 00:13:14.358 16:04:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:14.619 16:04:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:14.619 16:04:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:14.880 16:04:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:14.880 16:04:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2201768 00:13:14.880 16:04:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 2201768 ']' 00:13:14.880 16:04:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 2201768 00:13:14.880 16:04:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:13:14.880 16:04:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:14.880 16:04:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2201768 00:13:14.880 16:04:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:14.880 16:04:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:14.880 16:04:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2201768' 00:13:14.880 killing process with pid 2201768 00:13:14.880 16:04:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 2201768 00:13:14.880 16:04:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 2201768 00:13:15.141 16:04:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:15.141 16:04:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:15.141 00:13:15.141 real 0m51.312s 00:13:15.141 user 3m23.427s 00:13:15.141 sys 0m2.981s 00:13:15.141 16:04:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:15.141 16:04:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:15.141 ************************************ 00:13:15.141 END TEST nvmf_vfio_user 00:13:15.141 ************************************ 00:13:15.141 16:04:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:15.141 16:04:50 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:15.141 16:04:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:15.141 16:04:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:15.141 16:04:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:15.141 ************************************ 00:13:15.141 START 
TEST nvmf_vfio_user_nvme_compliance 00:13:15.141 ************************************ 00:13:15.141 16:04:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:15.141 * Looking for test storage... 00:13:15.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:13:15.141 16:04:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:15.141 16:04:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:13:15.141 16:04:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:15.141 16:04:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:15.141 16:04:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:15.141 16:04:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:15.141 16:04:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:15.141 16:04:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:15.141 16:04:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:15.141 16:04:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:15.141 16:04:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:15.141 16:04:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:15.403 16:04:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:15.403 16:04:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:15.403 16:04:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:15.403 16:04:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:15.403 16:04:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:15.403 16:04:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:15.403 16:04:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:15.403 16:04:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:15.403 16:04:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:15.403 16:04:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:15.403 16:04:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.403 16:04:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.403 16:04:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.403 16:04:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:13:15.403 16:04:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.403 16:04:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:13:15.403 16:04:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:15.403 16:04:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:15.403 16:04:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:15.403 16:04:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:15.403 16:04:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:15.403 16:04:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:15.403 16:04:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:15.403 16:04:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:15.403 16:04:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:13:15.403 16:04:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:15.403 16:04:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:15.403 16:04:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:15.403 16:04:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:15.403 16:04:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2202519 00:13:15.403 16:04:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2202519' 00:13:15.403 Process pid: 2202519 00:13:15.403 16:04:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:15.403 16:04:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:15.403 16:04:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2202519 00:13:15.403 16:04:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 2202519 ']' 00:13:15.403 16:04:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.403 16:04:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:15.403 16:04:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.403 16:04:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:15.403 16:04:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:15.403 [2024-07-15 16:04:51.055619] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:13:15.403 [2024-07-15 16:04:51.055675] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:15.403 EAL: No free 2048 kB hugepages reported on node 1 00:13:15.403 [2024-07-15 16:04:51.120977] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:15.403 [2024-07-15 16:04:51.193962] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:15.403 [2024-07-15 16:04:51.194003] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:15.403 [2024-07-15 16:04:51.194011] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:15.403 [2024-07-15 16:04:51.194017] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:15.403 [2024-07-15 16:04:51.194023] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
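The compliance suite below talks to a vfio-user endpoint that the test script builds on this freshly started target. A rough sketch of that setup, with rpc.py standing in for the script's rpc_cmd helper and all names taken from the commands recorded later in this log:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  "$RPC" nvmf_create_transport -t VFIOUSER         # register the vfio-user transport
  mkdir -p /var/run/vfio-user                      # socket directory for the listener
  "$RPC" bdev_malloc_create 64 512 -b malloc0      # backing bdev for the test namespace
  "$RPC" nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32   # -a any host, -s serial, -m max namespaces
  "$RPC" nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  "$RPC" nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0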
00:13:15.403 [2024-07-15 16:04:51.194170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:15.403 [2024-07-15 16:04:51.194231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:15.403 [2024-07-15 16:04:51.194235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.347 16:04:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:16.347 16:04:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:13:16.347 16:04:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:13:17.288 16:04:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:17.288 16:04:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:17.288 16:04:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:17.288 16:04:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.288 16:04:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:17.288 16:04:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.288 16:04:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:17.288 16:04:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:17.288 16:04:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.288 16:04:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:17.288 malloc0 00:13:17.288 16:04:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.288 16:04:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:17.288 16:04:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.288 16:04:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:17.288 16:04:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.288 16:04:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:17.288 16:04:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.288 16:04:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:17.288 16:04:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.288 16:04:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:17.288 16:04:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.288 16:04:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:17.288 16:04:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.288 
16:04:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:17.288 EAL: No free 2048 kB hugepages reported on node 1 00:13:17.288 00:13:17.288 00:13:17.288 CUnit - A unit testing framework for C - Version 2.1-3 00:13:17.288 http://cunit.sourceforge.net/ 00:13:17.288 00:13:17.288 00:13:17.288 Suite: nvme_compliance 00:13:17.288 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 16:04:53.089594] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:17.288 [2024-07-15 16:04:53.090925] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:17.288 [2024-07-15 16:04:53.090935] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:17.288 [2024-07-15 16:04:53.090939] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:17.288 [2024-07-15 16:04:53.092605] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:17.550 passed 00:13:17.550 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 16:04:53.188191] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:17.550 [2024-07-15 16:04:53.191212] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:17.550 passed 00:13:17.550 Test: admin_identify_ns ...[2024-07-15 16:04:53.290406] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:17.550 [2024-07-15 16:04:53.351139] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:17.550 [2024-07-15 16:04:53.359133] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:17.550 [2024-07-15 16:04:53.380242] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:17.810 passed 00:13:17.810 Test: admin_get_features_mandatory_features ...[2024-07-15 16:04:53.474668] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:17.810 [2024-07-15 16:04:53.477680] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:17.810 passed 00:13:17.810 Test: admin_get_features_optional_features ...[2024-07-15 16:04:53.572240] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:17.810 [2024-07-15 16:04:53.575252] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:17.810 passed 00:13:18.071 Test: admin_set_features_number_of_queues ...[2024-07-15 16:04:53.668366] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:18.071 [2024-07-15 16:04:53.774221] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:18.071 passed 00:13:18.071 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 16:04:53.868289] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:18.071 [2024-07-15 16:04:53.871316] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:18.071 passed 00:13:18.331 Test: admin_get_log_page_with_lpo ...[2024-07-15 16:04:53.961450] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:18.331 [2024-07-15 16:04:54.029136] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:18.331 [2024-07-15 16:04:54.045227] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:18.331 passed 00:13:18.331 Test: fabric_property_get ...[2024-07-15 16:04:54.135947] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:18.331 [2024-07-15 16:04:54.137202] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:13:18.331 [2024-07-15 16:04:54.138967] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:18.592 passed 00:13:18.592 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 16:04:54.237526] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:18.592 [2024-07-15 16:04:54.238788] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:18.592 [2024-07-15 16:04:54.240549] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:18.592 passed 00:13:18.592 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 16:04:54.333354] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:18.592 [2024-07-15 16:04:54.417130] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:18.592 [2024-07-15 16:04:54.433131] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:18.854 [2024-07-15 16:04:54.438212] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:18.854 passed 00:13:18.854 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 16:04:54.534654] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:18.854 [2024-07-15 16:04:54.535894] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:18.854 [2024-07-15 16:04:54.537667] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:18.854 passed 00:13:18.854 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 16:04:54.630387] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:19.115 [2024-07-15 16:04:54.706132] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:19.115 [2024-07-15 16:04:54.730133] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:19.115 [2024-07-15 16:04:54.735216] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:19.115 passed 00:13:19.115 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 16:04:54.833580] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:19.115 [2024-07-15 16:04:54.834815] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:19.115 [2024-07-15 16:04:54.834835] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:19.115 [2024-07-15 16:04:54.836600] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:19.115 passed 00:13:19.115 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 16:04:54.931384] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:19.376 [2024-07-15 16:04:55.023129] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:13:19.376 [2024-07-15 16:04:55.031130] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:19.376 [2024-07-15 16:04:55.039129] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:19.376 [2024-07-15 16:04:55.047132] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:19.376 [2024-07-15 16:04:55.076216] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:19.376 passed 00:13:19.376 Test: admin_create_io_sq_verify_pc ...[2024-07-15 16:04:55.169231] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:19.376 [2024-07-15 16:04:55.188139] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:19.376 [2024-07-15 16:04:55.205396] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:19.663 passed 00:13:19.663 Test: admin_create_io_qp_max_qps ...[2024-07-15 16:04:55.299965] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:20.607 [2024-07-15 16:04:56.386134] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:13:21.178 [2024-07-15 16:04:56.777922] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:21.178 passed 00:13:21.178 Test: admin_create_io_sq_shared_cq ...[2024-07-15 16:04:56.876090] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:21.178 [2024-07-15 16:04:57.009138] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:21.438 [2024-07-15 16:04:57.046192] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:21.438 passed 00:13:21.438 00:13:21.438 Run Summary: Type Total Ran Passed Failed Inactive 00:13:21.438 suites 1 1 n/a 0 0 00:13:21.438 tests 18 18 18 0 0 00:13:21.438 asserts 360 360 360 0 n/a 00:13:21.438 00:13:21.438 Elapsed time = 1.660 seconds 00:13:21.438 16:04:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2202519 00:13:21.438 16:04:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 2202519 ']' 00:13:21.438 16:04:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 2202519 00:13:21.438 16:04:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:13:21.438 16:04:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:21.438 16:04:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2202519 00:13:21.438 16:04:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:21.438 16:04:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:21.438 16:04:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2202519' 00:13:21.438 killing process with pid 2202519 00:13:21.438 16:04:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 2202519 00:13:21.438 16:04:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 2202519 00:13:21.700 16:04:57 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:21.700 16:04:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:21.700 00:13:21.700 real 0m6.419s 00:13:21.700 user 0m18.414s 00:13:21.700 sys 0m0.452s 00:13:21.700 16:04:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:21.700 16:04:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:21.700 ************************************ 00:13:21.700 END TEST nvmf_vfio_user_nvme_compliance 00:13:21.700 ************************************ 00:13:21.700 16:04:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:21.700 16:04:57 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:21.700 16:04:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:21.700 16:04:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:21.700 16:04:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:21.700 ************************************ 00:13:21.700 START TEST nvmf_vfio_user_fuzz 00:13:21.700 ************************************ 00:13:21.700 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:21.700 * Looking for test storage... 00:13:21.700 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:21.700 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:21.700 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:13:21.700 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:21.700 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:21.700 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:21.700 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:21.700 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:21.700 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:21.700 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:21.700 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:21.700 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:21.700 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:21.700 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:21.700 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:21.700 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:21.700 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:21.700 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:13:21.700 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:21.700 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:21.700 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:21.700 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:21.700 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:21.700 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.700 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.700 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.700 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:13:21.700 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.700 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:13:21.700 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:21.700 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:21.700 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:21.700 16:04:57 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:21.700 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:21.700 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:21.700 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:21.700 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:21.700 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:21.700 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:21.700 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:21.700 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:21.701 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:21.701 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:21.701 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:21.701 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2203915 00:13:21.701 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2203915' 00:13:21.701 Process pid: 2203915 00:13:21.701 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:21.701 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:21.701 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2203915 00:13:21.701 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 2203915 ']' 00:13:21.701 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.701 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:21.701 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
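The fuzz pass that follows reuses the same kind of vfio-user endpoint (a malloc0 namespace under nqn.2021-09.io.spdk:cnode0, listening at /var/run/vfio-user) and then points the fuzzer at it. A sketch of that invocation, with every flag copied from the command recorded later in this log; -t is the runtime in seconds and -S fixes the random seed so a failing run can be replayed:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  TRID='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'
  # flags below are taken verbatim from the recorded command line
  "$SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -t 30 -S 123456 -F "$TRID" -N -a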
00:13:21.701 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:21.701 16:04:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:22.643 16:04:58 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:22.643 16:04:58 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:13:22.643 16:04:58 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:23.586 16:04:59 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:23.586 16:04:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.586 16:04:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:23.586 16:04:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.586 16:04:59 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:23.586 16:04:59 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:23.586 16:04:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.586 16:04:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:23.586 malloc0 00:13:23.586 16:04:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.586 16:04:59 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:23.586 16:04:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.586 16:04:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:23.586 16:04:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.586 16:04:59 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:23.586 16:04:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.586 16:04:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:23.586 16:04:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.586 16:04:59 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:23.586 16:04:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.586 16:04:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:23.586 16:04:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.586 16:04:59 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:23.586 16:04:59 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:55.697 Fuzzing completed. 
Shutting down the fuzz application 00:13:55.697 00:13:55.697 Dumping successful admin opcodes: 00:13:55.697 8, 9, 10, 24, 00:13:55.697 Dumping successful io opcodes: 00:13:55.697 0, 00:13:55.697 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1123659, total successful commands: 4423, random_seed: 309212928 00:13:55.697 NS: 0x200003a1ef00 admin qp, Total commands completed: 141388, total successful commands: 1147, random_seed: 1960701824 00:13:55.697 16:05:30 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:55.697 16:05:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.697 16:05:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:55.697 16:05:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.697 16:05:30 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2203915 00:13:55.697 16:05:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 2203915 ']' 00:13:55.697 16:05:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 2203915 00:13:55.697 16:05:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:13:55.697 16:05:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:55.697 16:05:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2203915 00:13:55.697 16:05:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:55.697 16:05:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:55.697 16:05:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2203915' 00:13:55.697 killing process with pid 2203915 00:13:55.697 16:05:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 2203915 00:13:55.697 16:05:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 2203915 00:13:55.697 16:05:30 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:13:55.697 16:05:31 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:55.697 00:13:55.697 real 0m33.685s 00:13:55.697 user 0m38.194s 00:13:55.697 sys 0m25.277s 00:13:55.697 16:05:31 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:55.697 16:05:31 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:55.697 ************************************ 00:13:55.697 END TEST nvmf_vfio_user_fuzz 00:13:55.697 ************************************ 00:13:55.697 16:05:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:55.697 16:05:31 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:55.697 16:05:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:55.697 16:05:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:55.697 16:05:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:55.697 ************************************ 
00:13:55.697 START TEST nvmf_host_management 00:13:55.697 ************************************ 00:13:55.697 16:05:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:55.697 * Looking for test storage... 00:13:55.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:55.697 16:05:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:55.697 16:05:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:13:55.697 16:05:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:55.697 16:05:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:55.697 16:05:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:55.697 16:05:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:55.697 16:05:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:55.697 16:05:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:55.697 16:05:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:55.697 16:05:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:55.697 16:05:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:55.697 16:05:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:55.697 16:05:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:55.697 16:05:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:55.697 16:05:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:55.697 16:05:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:55.697 16:05:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:55.697 16:05:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:55.697 16:05:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:55.697 16:05:31 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:55.697 16:05:31 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:55.698 16:05:31 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:55.698 16:05:31 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.698 
16:05:31 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.698 16:05:31 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.698 16:05:31 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:13:55.698 16:05:31 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.698 16:05:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:13:55.698 16:05:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:55.698 16:05:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:55.698 16:05:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:55.698 16:05:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:55.698 16:05:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:55.698 16:05:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:55.698 16:05:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:55.698 16:05:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:55.698 16:05:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:55.698 16:05:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:55.698 16:05:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:13:55.698 16:05:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:55.698 16:05:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:55.698 16:05:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:55.698 16:05:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:55.698 16:05:31 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:55.698 16:05:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:55.698 16:05:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:55.698 16:05:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:55.698 16:05:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:55.698 16:05:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:55.698 16:05:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:13:55.698 16:05:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:02.295 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:02.295 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:14:02.295 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:02.295 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:02.295 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:02.295 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:02.295 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:02.295 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:14:02.295 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:02.295 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:14:02.295 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:14:02.295 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:14:02.295 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:14:02.295 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:14:02.295 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:14:02.295 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:02.295 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:02.295 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:02.295 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:02.295 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:02.295 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:02.295 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:02.295 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:02.295 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:02.295 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:02.295 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:02.295 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:02.296 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:02.296 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:02.296 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:02.296 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:02.296 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:02.557 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:02.557 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:02.557 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:02.557 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:02.557 16:05:38 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:02.557 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:02.557 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:02.558 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:02.558 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms 00:14:02.558 00:14:02.558 --- 10.0.0.2 ping statistics --- 00:14:02.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.558 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:14:02.558 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:02.558 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:02.558 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.400 ms 00:14:02.558 00:14:02.558 --- 10.0.0.1 ping statistics --- 00:14:02.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.558 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:14:02.558 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:02.819 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:14:02.819 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:02.819 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:02.819 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:02.819 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:02.819 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:02.819 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:02.820 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:02.820 16:05:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:14:02.820 16:05:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:14:02.820 16:05:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:02.820 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:02.820 16:05:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:02.820 16:05:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:02.820 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2214215 00:14:02.820 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2214215 00:14:02.820 16:05:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:02.820 16:05:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2214215 ']' 00:14:02.820 16:05:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.820 16:05:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:02.820 16:05:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:02.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.820 16:05:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:02.820 16:05:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:02.820 [2024-07-15 16:05:38.504874] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:14:02.820 [2024-07-15 16:05:38.504957] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.820 EAL: No free 2048 kB hugepages reported on node 1 00:14:02.820 [2024-07-15 16:05:38.571959] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:02.820 [2024-07-15 16:05:38.642051] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:02.820 [2024-07-15 16:05:38.642097] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:02.820 [2024-07-15 16:05:38.642103] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:02.820 [2024-07-15 16:05:38.642108] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:02.820 [2024-07-15 16:05:38.642112] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:02.820 [2024-07-15 16:05:38.642254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:02.820 [2024-07-15 16:05:38.642393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:02.820 [2024-07-15 16:05:38.642556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.820 [2024-07-15 16:05:38.642557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:03.764 16:05:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:03.764 16:05:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:14:03.764 16:05:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:03.764 16:05:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:03.764 16:05:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:03.765 16:05:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:03.765 16:05:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:03.765 16:05:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.765 16:05:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:03.765 [2024-07-15 16:05:39.322830] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:03.765 16:05:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.765 16:05:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:03.765 16:05:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:03.765 16:05:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:03.765 16:05:39 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:03.765 16:05:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:14:03.765 16:05:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:14:03.765 16:05:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.765 16:05:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:03.765 Malloc0 00:14:03.765 [2024-07-15 16:05:39.386144] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:03.765 16:05:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.765 16:05:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:03.765 16:05:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:03.765 16:05:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:03.765 16:05:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2214536 00:14:03.765 16:05:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2214536 /var/tmp/bdevperf.sock 00:14:03.765 16:05:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2214536 ']' 00:14:03.765 16:05:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:03.765 16:05:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:03.765 16:05:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:03.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
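The rpc_cmd call above replays a batched rpcs.txt payload that is not echoed, so the exact provisioning is not visible here. Judging from the Malloc0 bdev, the NVMe/TCP listener notice on 10.0.0.2 port 4420, and the cnode0/host0 NQNs used by bdevperf and by the later nvmf_subsystem_remove_host, it is plausibly equivalent to the following sketch; every call below is a reconstruction from those names, not a copy of host_management.sh.

# Hypothetical expansion of the batched rpcs.txt payload; names are taken from the log.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

"$RPC" bdev_malloc_create 64 512 -b Malloc0
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME  # serial reuses NVMF_SERIAL above (assumption)
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# host0 appears to be whitelisted explicitly, since removing it later aborts the running I/O:
"$RPC" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0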
00:14:03.765 16:05:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:03.765 16:05:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:03.765 16:05:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:03.765 16:05:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:03.765 16:05:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:03.765 16:05:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:03.765 16:05:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:03.765 16:05:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:03.765 { 00:14:03.765 "params": { 00:14:03.765 "name": "Nvme$subsystem", 00:14:03.765 "trtype": "$TEST_TRANSPORT", 00:14:03.765 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:03.765 "adrfam": "ipv4", 00:14:03.765 "trsvcid": "$NVMF_PORT", 00:14:03.765 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:03.765 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:03.765 "hdgst": ${hdgst:-false}, 00:14:03.765 "ddgst": ${ddgst:-false} 00:14:03.765 }, 00:14:03.765 "method": "bdev_nvme_attach_controller" 00:14:03.765 } 00:14:03.765 EOF 00:14:03.765 )") 00:14:03.765 16:05:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:03.765 16:05:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:14:03.765 16:05:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:03.765 16:05:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:03.765 "params": { 00:14:03.765 "name": "Nvme0", 00:14:03.765 "trtype": "tcp", 00:14:03.765 "traddr": "10.0.0.2", 00:14:03.765 "adrfam": "ipv4", 00:14:03.765 "trsvcid": "4420", 00:14:03.765 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:03.765 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:03.765 "hdgst": false, 00:14:03.765 "ddgst": false 00:14:03.765 }, 00:14:03.765 "method": "bdev_nvme_attach_controller" 00:14:03.765 }' 00:14:03.765 [2024-07-15 16:05:39.486153] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:14:03.765 [2024-07-15 16:05:39.486205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2214536 ] 00:14:03.765 EAL: No free 2048 kB hugepages reported on node 1 00:14:03.765 [2024-07-15 16:05:39.544947] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.027 [2024-07-15 16:05:39.609543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.027 Running I/O for 10 seconds... 
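The --json /dev/fd/63 argument above is a process substitution fed by gen_nvmf_target_json, which wraps the bdev_nvme_attach_controller parameters printed in the trace into a JSON config for bdevperf. A hedged equivalent using a plain file is sketched below: the params block is verbatim from the trace, while the surrounding subsystems/bdev wrapper is assumed from SPDK's usual JSON-config layout rather than taken from gen_nvmf_target_json itself.

# Illustrative stand-in for the gen_nvmf_target_json output consumed via /dev/fd/63.
cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# Same bdevperf flags as in the trace: queue depth 64, 64 KiB I/O, verify workload, 10 s.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 10

With this config, bdevperf attaches Nvme0 over TCP to the subsystem created above and drives the 10-second verify workload whose read counter is polled next.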
00:14:04.603 16:05:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:04.603 16:05:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:14:04.603 16:05:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:04.603 16:05:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.603 16:05:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:04.603 16:05:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.603 16:05:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:04.603 16:05:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:04.603 16:05:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:04.603 16:05:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:04.603 16:05:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:14:04.603 16:05:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:14:04.603 16:05:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:04.603 16:05:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:04.603 16:05:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:04.603 16:05:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:04.603 16:05:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.603 16:05:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:04.603 16:05:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.603 16:05:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:14:04.603 16:05:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:14:04.603 16:05:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:14:04.603 16:05:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:14:04.603 16:05:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:14:04.603 16:05:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:04.603 16:05:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.603 16:05:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:04.603 [2024-07-15 16:05:40.333119] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee3e40 is same with the state(5) to be set 00:14:04.603 [2024-07-15 16:05:40.333170] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee3e40 is same with the state(5) to be set 00:14:04.603 [2024-07-15 16:05:40.333177] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee3e40 is same with the state(5) to be set 
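For context on the ERROR and ABORTED notices that start just above and continue below: the script waits until bdevperf has completed at least 100 reads (515 at this point), then removes host0 from the subsystem while I/O is still in flight, so the target tears the queue pair down and the outstanding WRITEs complete as ABORTED - SQ DELETION. A compact sketch of that step, with rpc.py standing in for the rpc_cmd wrapper and the retry timing chosen arbitrarily:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Mirror of the waitforio logic traced above: up to 10 polls of bdevperf's read counter.
for _ in $(seq 1 10); do
    reads=$("$RPC" -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 |
            jq -r '.bdevs[0].num_read_ops')
    [ "$reads" -ge 100 ] && break   # 515 reads had completed at this point in the run
    sleep 0.25                      # retry interval is an arbitrary choice for this sketch
done

# Remove the host while bdevperf is still driving I/O; this is what triggers the
# qpair teardown and "ABORTED - SQ DELETION" notices captured around this point.
"$RPC" nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0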
00:14:04.603 [2024-07-15 16:05:40.333184] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee3e40 is same with the state(5) to be set
[... the identical recv-state message is repeated roughly 50 more times, timestamps 16:05:40.333190 through 16:05:40.333515 ...]
00:14:04.604 [2024-07-15 16:05:40.333962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:04.604 [2024-07-15 16:05:40.333998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the WRITE command / ABORTED - SQ DELETION pair repeats for cid 4 through 48, lba 74240 through 79872 in steps of 128 ...]
00:14:04.605 [2024-07-15 16:05:40.334845] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.605 [2024-07-15 16:05:40.334854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.605 [2024-07-15 16:05:40.334865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.605 [2024-07-15 16:05:40.334873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.605 [2024-07-15 16:05:40.334883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.605 [2024-07-15 16:05:40.334891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.605 [2024-07-15 16:05:40.334902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.605 [2024-07-15 16:05:40.334911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.605 [2024-07-15 16:05:40.334920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.605 [2024-07-15 16:05:40.334929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.605 [2024-07-15 16:05:40.334939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.605 [2024-07-15 16:05:40.334947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.605 [2024-07-15 16:05:40.334957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.605 [2024-07-15 16:05:40.334966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.605 [2024-07-15 16:05:40.334977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.605 [2024-07-15 16:05:40.334985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.605 [2024-07-15 16:05:40.334996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.605 [2024-07-15 16:05:40.335004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.605 [2024-07-15 16:05:40.335014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.605 [2024-07-15 16:05:40.335022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.605 [2024-07-15 16:05:40.335032] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.605 [2024-07-15 16:05:40.335041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.605 [2024-07-15 16:05:40.335051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.605 [2024-07-15 16:05:40.335059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.605 [2024-07-15 16:05:40.335069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.605 [2024-07-15 16:05:40.335078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.605 [2024-07-15 16:05:40.335088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.605 [2024-07-15 16:05:40.335096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.605 [2024-07-15 16:05:40.335106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.605 [2024-07-15 16:05:40.335115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.606 [2024-07-15 16:05:40.335128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.606 [2024-07-15 16:05:40.335137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.606 [2024-07-15 16:05:40.335148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.606 [2024-07-15 16:05:40.335156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.606 [2024-07-15 16:05:40.335166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.606 [2024-07-15 16:05:40.335175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.606 [2024-07-15 16:05:40.335227] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x234f4f0 was disconnected and freed. reset controller. 
00:14:04.606 [2024-07-15 16:05:40.336402] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:14:04.606 16:05:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.606 task offset: 74112 on job bdev=Nvme0n1 fails 00:14:04.606 00:14:04.606 Latency(us) 00:14:04.606 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:04.606 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:04.606 Job: Nvme0n1 ended in about 0.54 seconds with error 00:14:04.606 Verification LBA range: start 0x0 length 0x400 00:14:04.606 Nvme0n1 : 0.54 1061.08 66.32 117.90 0.00 53022.04 1706.67 47841.28 00:14:04.606 =================================================================================================================== 00:14:04.606 Total : 1061.08 66.32 117.90 0.00 53022.04 1706.67 47841.28 00:14:04.606 [2024-07-15 16:05:40.338406] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:04.606 [2024-07-15 16:05:40.338429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f3e3b0 (9): Bad file descriptor 00:14:04.606 16:05:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:04.606 16:05:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.606 16:05:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:04.606 16:05:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.606 16:05:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:14:04.606 [2024-07-15 16:05:40.400908] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
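Note on the host-authorization step above: rpc_cmd is the autotest helper that forwards RPCs to the running target, equivalent in effect to invoking scripts/rpc.py directly. A minimal standalone sketch of the same step follows; the -s socket path is an assumption based on the SPDK default /var/tmp/spdk.sock (also the socket the target waits on later in this log), and the $rpc shorthand is illustrative.

# Hedged sketch: authorize host0 on subsystem cnode0, mirroring the
# rpc_cmd nvmf_subsystem_add_host call issued by host_management.sh above.
# $rpc is an illustrative shorthand; -s assumes the default RPC socket.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc -s /var/tmp/spdk.sock nvmf_subsystem_add_host \
    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0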
00:14:05.548 16:05:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2214536 00:14:05.548 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2214536) - No such process 00:14:05.548 16:05:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:14:05.548 16:05:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:14:05.548 16:05:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:14:05.548 16:05:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:14:05.548 16:05:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:05.548 16:05:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:05.548 16:05:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:05.548 16:05:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:05.548 { 00:14:05.548 "params": { 00:14:05.548 "name": "Nvme$subsystem", 00:14:05.548 "trtype": "$TEST_TRANSPORT", 00:14:05.548 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:05.548 "adrfam": "ipv4", 00:14:05.548 "trsvcid": "$NVMF_PORT", 00:14:05.548 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:05.548 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:05.548 "hdgst": ${hdgst:-false}, 00:14:05.548 "ddgst": ${ddgst:-false} 00:14:05.548 }, 00:14:05.548 "method": "bdev_nvme_attach_controller" 00:14:05.548 } 00:14:05.548 EOF 00:14:05.548 )") 00:14:05.548 16:05:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:05.548 16:05:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:14:05.548 16:05:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:05.548 16:05:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:05.548 "params": { 00:14:05.548 "name": "Nvme0", 00:14:05.548 "trtype": "tcp", 00:14:05.548 "traddr": "10.0.0.2", 00:14:05.548 "adrfam": "ipv4", 00:14:05.548 "trsvcid": "4420", 00:14:05.548 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:05.548 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:05.548 "hdgst": false, 00:14:05.548 "ddgst": false 00:14:05.548 }, 00:14:05.548 "method": "bdev_nvme_attach_controller" 00:14:05.548 }' 00:14:05.809 [2024-07-15 16:05:41.409212] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:14:05.809 [2024-07-15 16:05:41.409271] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2214938 ] 00:14:05.809 EAL: No free 2048 kB hugepages reported on node 1 00:14:05.809 [2024-07-15 16:05:41.468153] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.809 [2024-07-15 16:05:41.531571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.069 Running I/O for 1 seconds... 
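The bdevperf re-run above reads its bdev configuration as JSON from /dev/fd/62, built by gen_nvmf_target_json from the printf output shown. A hedged, self-contained sketch of an equivalent launch using a temp file is below: the attach parameters mirror the values printed above, while the outer "subsystems"/"bdev" wrapper is assumed from the standard SPDK JSON-config layout (it is not shown in the log), and the /tmp file path is illustrative.

# Sketch only: write an equivalent bdev config to a file and launch bdevperf
# with the same queue depth, IO size, workload and runtime as the test above.
cat > /tmp/bdevperf-nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/bdevperf-nvme0.json -q 64 -o 65536 -w verify -t 1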
00:14:07.056 00:14:07.056 Latency(us) 00:14:07.056 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:07.056 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:07.056 Verification LBA range: start 0x0 length 0x400 00:14:07.056 Nvme0n1 : 1.03 1236.91 77.31 0.00 0.00 50820.07 9557.33 41069.23 00:14:07.056 =================================================================================================================== 00:14:07.056 Total : 1236.91 77.31 0.00 0.00 50820.07 9557.33 41069.23 00:14:07.317 16:05:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:14:07.317 16:05:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:07.317 16:05:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:14:07.317 16:05:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:07.317 16:05:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:14:07.317 16:05:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:07.317 16:05:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:14:07.317 16:05:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:07.317 16:05:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:14:07.317 16:05:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:07.317 16:05:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:07.317 rmmod nvme_tcp 00:14:07.317 rmmod nvme_fabrics 00:14:07.317 rmmod nvme_keyring 00:14:07.317 16:05:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:07.317 16:05:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:14:07.317 16:05:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:14:07.317 16:05:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2214215 ']' 00:14:07.317 16:05:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2214215 00:14:07.317 16:05:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 2214215 ']' 00:14:07.317 16:05:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 2214215 00:14:07.317 16:05:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:14:07.317 16:05:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:07.317 16:05:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2214215 00:14:07.317 16:05:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:07.317 16:05:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:07.317 16:05:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2214215' 00:14:07.317 killing process with pid 2214215 00:14:07.317 16:05:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 2214215 00:14:07.317 16:05:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 2214215 00:14:07.577 [2024-07-15 16:05:43.251788] app.c: 
710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:07.577 16:05:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:07.577 16:05:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:07.577 16:05:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:07.577 16:05:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:07.577 16:05:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:07.577 16:05:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.577 16:05:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:07.577 16:05:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.119 16:05:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:10.119 16:05:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:14:10.119 00:14:10.119 real 0m14.210s 00:14:10.119 user 0m22.956s 00:14:10.119 sys 0m6.300s 00:14:10.119 16:05:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:10.119 16:05:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:10.119 ************************************ 00:14:10.119 END TEST nvmf_host_management 00:14:10.119 ************************************ 00:14:10.119 16:05:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:10.119 16:05:45 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:10.119 16:05:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:10.119 16:05:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:10.119 16:05:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:10.119 ************************************ 00:14:10.119 START TEST nvmf_lvol 00:14:10.120 ************************************ 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:10.120 * Looking for test storage... 
00:14:10.120 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.120 16:05:45 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:14:10.120 16:05:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:16.708 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:16.708 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:14:16.708 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:16.708 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:16.708 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:16.708 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:16.708 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:16.708 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:14:16.708 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:16.708 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:14:16.708 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:14:16.708 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:16.709 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:16.709 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:16.709 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:16.709 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:16.709 
16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:16.709 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:16.970 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:16.970 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:16.970 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:16.970 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:16.970 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.502 ms 00:14:16.970 00:14:16.970 --- 10.0.0.2 ping statistics --- 00:14:16.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.970 rtt min/avg/max/mdev = 0.502/0.502/0.502/0.000 ms 00:14:16.970 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:16.970 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:16.970 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:14:16.970 00:14:16.970 --- 10.0.0.1 ping statistics --- 00:14:16.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.970 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:14:16.970 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:16.970 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:14:16.970 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:16.970 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:16.970 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:16.970 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:16.970 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:16.970 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:16.970 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:16.970 16:05:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:16.970 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:16.970 16:05:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:16.970 16:05:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:16.970 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2219367 00:14:16.970 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2219367 00:14:16.970 16:05:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:16.970 16:05:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 2219367 ']' 00:14:16.970 16:05:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.970 16:05:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:16.970 16:05:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:16.970 16:05:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:16.970 16:05:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:16.970 [2024-07-15 16:05:52.763047] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:14:16.970 [2024-07-15 16:05:52.763102] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:16.970 EAL: No free 2048 kB hugepages reported on node 1 00:14:17.230 [2024-07-15 16:05:52.832797] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:17.230 [2024-07-15 16:05:52.903895] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:17.230 [2024-07-15 16:05:52.903936] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:17.230 [2024-07-15 16:05:52.903944] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:17.230 [2024-07-15 16:05:52.903950] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:17.230 [2024-07-15 16:05:52.903956] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:17.230 [2024-07-15 16:05:52.904024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:17.230 [2024-07-15 16:05:52.904159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:17.230 [2024-07-15 16:05:52.904161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.800 16:05:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:17.800 16:05:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:14:17.800 16:05:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:17.800 16:05:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:17.800 16:05:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:17.800 16:05:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:17.800 16:05:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:18.059 [2024-07-15 16:05:53.720497] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:18.059 16:05:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:18.319 16:05:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:18.319 16:05:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:18.319 16:05:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:18.319 16:05:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:18.581 16:05:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:18.842 16:05:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=a4a5b4aa-13c5-4960-a930-e7b8e9ebfe3d 00:14:18.842 16:05:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a4a5b4aa-13c5-4960-a930-e7b8e9ebfe3d lvol 20 00:14:18.842 16:05:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=f4372912-f594-4bfe-9ed5-10e4bc00a71b 00:14:18.842 16:05:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:19.102 16:05:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f4372912-f594-4bfe-9ed5-10e4bc00a71b 00:14:19.362 16:05:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:14:19.362 [2024-07-15 16:05:55.106693] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:19.362 16:05:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:19.622 16:05:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2219980 00:14:19.622 16:05:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:19.622 16:05:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:19.622 EAL: No free 2048 kB hugepages reported on node 1 00:14:20.562 16:05:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot f4372912-f594-4bfe-9ed5-10e4bc00a71b MY_SNAPSHOT 00:14:20.821 16:05:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=d0684c1b-5722-4c3c-bda2-0b3b2409f4d6 00:14:20.821 16:05:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize f4372912-f594-4bfe-9ed5-10e4bc00a71b 30 00:14:21.081 16:05:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone d0684c1b-5722-4c3c-bda2-0b3b2409f4d6 MY_CLONE 00:14:21.081 16:05:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=129578fe-d17f-4167-bcf0-acccadd588e0 00:14:21.081 16:05:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 129578fe-d17f-4167-bcf0-acccadd588e0 00:14:21.651 16:05:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2219980 00:14:31.646 Initializing NVMe Controllers 00:14:31.646 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:31.646 Controller IO queue size 128, less than required. 00:14:31.646 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:31.646 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:31.647 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:31.647 Initialization complete. Launching workers. 
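For readability, here is a condensed sketch of the provisioning sequence the nvmf_lvol test drives above, expressed as direct scripts/rpc.py calls. The RPC verbs and arguments are the ones visible in the log; the $rpc shorthand and the lvs/lvol/snap/clone variable names are illustrative stand-ins for the UUIDs the test captures from each call.

# Hedged sketch: two 64 MB / 512 B malloc bdevs striped into raid0, an lvstore
# on top, a 20 MB lvol exported over NVMe/TCP, then snapshot/resize/clone/inflate,
# mirroring the bdev_* and nvmf_* RPCs in the log above. Variable names are illustrative.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_malloc_create 64 512          # -> Malloc0
$rpc bdev_malloc_create 64 512          # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"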
00:14:31.647 ======================================================== 00:14:31.647 Latency(us) 00:14:31.647 Device Information : IOPS MiB/s Average min max 00:14:31.647 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12459.90 48.67 10278.02 1564.98 62063.41 00:14:31.647 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17727.20 69.25 7221.89 1082.02 48917.53 00:14:31.647 ======================================================== 00:14:31.647 Total : 30187.09 117.92 8483.32 1082.02 62063.41 00:14:31.647 00:14:31.647 16:06:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:31.647 16:06:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f4372912-f594-4bfe-9ed5-10e4bc00a71b 00:14:31.647 16:06:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a4a5b4aa-13c5-4960-a930-e7b8e9ebfe3d 00:14:31.647 16:06:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:31.647 16:06:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:31.647 16:06:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:31.647 16:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:31.647 16:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:14:31.647 16:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:31.647 16:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:14:31.647 16:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:31.647 16:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:31.647 rmmod nvme_tcp 00:14:31.647 rmmod nvme_fabrics 00:14:31.647 rmmod nvme_keyring 00:14:31.647 16:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:31.647 16:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:14:31.647 16:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:14:31.647 16:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2219367 ']' 00:14:31.647 16:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2219367 00:14:31.647 16:06:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 2219367 ']' 00:14:31.647 16:06:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 2219367 00:14:31.647 16:06:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:14:31.647 16:06:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:31.647 16:06:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2219367 00:14:31.647 16:06:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:31.647 16:06:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:31.647 16:06:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2219367' 00:14:31.647 killing process with pid 2219367 00:14:31.647 16:06:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 2219367 00:14:31.647 16:06:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 2219367 00:14:31.647 16:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:31.647 
16:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:31.647 16:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:31.647 16:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:31.647 16:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:31.647 16:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.647 16:06:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:31.647 16:06:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:33.097 00:14:33.097 real 0m23.072s 00:14:33.097 user 1m3.613s 00:14:33.097 sys 0m7.740s 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:33.097 ************************************ 00:14:33.097 END TEST nvmf_lvol 00:14:33.097 ************************************ 00:14:33.097 16:06:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:33.097 16:06:08 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:33.097 16:06:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:33.097 16:06:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:33.097 16:06:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:33.097 ************************************ 00:14:33.097 START TEST nvmf_lvs_grow 00:14:33.097 ************************************ 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:33.097 * Looking for test storage... 
00:14:33.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:14:33.097 16:06:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:39.688 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:39.688 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:39.688 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:39.688 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:39.688 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:39.949 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:39.949 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:39.949 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:39.949 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:39.949 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.500 ms 00:14:39.949 00:14:39.949 --- 10.0.0.2 ping statistics --- 00:14:39.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:39.949 rtt min/avg/max/mdev = 0.500/0.500/0.500/0.000 ms 00:14:39.949 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:39.949 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:39.949 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.479 ms 00:14:39.949 00:14:39.949 --- 10.0.0.1 ping statistics --- 00:14:39.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:39.950 rtt min/avg/max/mdev = 0.479/0.479/0.479/0.000 ms 00:14:39.950 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:39.950 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:14:39.950 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:39.950 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:39.950 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:39.950 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:39.950 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:39.950 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:39.950 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:39.950 16:06:15 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:14:39.950 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:39.950 16:06:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:39.950 16:06:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:39.950 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2226888 00:14:39.950 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2226888 00:14:39.950 16:06:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:39.950 16:06:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 2226888 ']' 00:14:39.950 16:06:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:39.950 16:06:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:39.950 16:06:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:39.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:39.950 16:06:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:39.950 16:06:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:39.950 [2024-07-15 16:06:15.748395] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:14:39.950 [2024-07-15 16:06:15.748446] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:39.950 EAL: No free 2048 kB hugepages reported on node 1 00:14:40.210 [2024-07-15 16:06:15.812546] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.210 [2024-07-15 16:06:15.876158] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:40.210 [2024-07-15 16:06:15.876193] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:40.210 [2024-07-15 16:06:15.876200] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:40.210 [2024-07-15 16:06:15.876207] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:40.210 [2024-07-15 16:06:15.876212] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:40.210 [2024-07-15 16:06:15.876232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.782 16:06:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:40.782 16:06:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:14:40.782 16:06:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:40.782 16:06:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:40.782 16:06:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:40.782 16:06:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:40.782 16:06:16 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:41.044 [2024-07-15 16:06:16.694900] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:41.044 16:06:16 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:14:41.044 16:06:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:41.044 16:06:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:41.044 16:06:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:41.044 ************************************ 00:14:41.044 START TEST lvs_grow_clean 00:14:41.044 ************************************ 00:14:41.044 16:06:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:14:41.044 16:06:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:41.044 16:06:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:41.044 16:06:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:41.044 16:06:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:41.044 16:06:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:41.044 16:06:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:41.044 16:06:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:41.044 16:06:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:41.044 16:06:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:41.306 16:06:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:14:41.306 16:06:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:41.306 16:06:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=6e5c46bd-5f4b-44c1-95e0-d2d05386812a 00:14:41.306 16:06:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e5c46bd-5f4b-44c1-95e0-d2d05386812a 00:14:41.306 16:06:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:41.567 16:06:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:41.567 16:06:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:41.567 16:06:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6e5c46bd-5f4b-44c1-95e0-d2d05386812a lvol 150 00:14:41.828 16:06:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=0491e444-dfe1-4132-9aef-814735b2884e 00:14:41.828 16:06:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:41.828 16:06:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:41.828 [2024-07-15 16:06:17.578708] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:41.828 [2024-07-15 16:06:17.578759] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:41.828 true 00:14:41.828 16:06:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e5c46bd-5f4b-44c1-95e0-d2d05386812a 00:14:41.828 16:06:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:42.089 16:06:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:42.089 16:06:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:42.089 16:06:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0491e444-dfe1-4132-9aef-814735b2884e 00:14:42.350 16:06:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:42.350 [2024-07-15 16:06:18.188702] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:42.611 16:06:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:42.611 16:06:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2227299 00:14:42.611 16:06:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:42.611 16:06:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2227299 /var/tmp/bdevperf.sock 00:14:42.611 16:06:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 2227299 ']' 00:14:42.611 16:06:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:42.611 16:06:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:42.611 16:06:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:42.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:42.611 16:06:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:42.611 16:06:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:42.611 16:06:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:42.611 [2024-07-15 16:06:18.397493] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
00:14:42.611 [2024-07-15 16:06:18.397536] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2227299 ] 00:14:42.611 EAL: No free 2048 kB hugepages reported on node 1 00:14:42.871 [2024-07-15 16:06:18.464204] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.871 [2024-07-15 16:06:18.528138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:43.440 16:06:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:43.440 16:06:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:14:43.440 16:06:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:43.700 Nvme0n1 00:14:43.700 16:06:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:43.700 [ 00:14:43.700 { 00:14:43.700 "name": "Nvme0n1", 00:14:43.700 "aliases": [ 00:14:43.700 "0491e444-dfe1-4132-9aef-814735b2884e" 00:14:43.700 ], 00:14:43.700 "product_name": "NVMe disk", 00:14:43.700 "block_size": 4096, 00:14:43.700 "num_blocks": 38912, 00:14:43.700 "uuid": "0491e444-dfe1-4132-9aef-814735b2884e", 00:14:43.700 "assigned_rate_limits": { 00:14:43.700 "rw_ios_per_sec": 0, 00:14:43.700 "rw_mbytes_per_sec": 0, 00:14:43.700 "r_mbytes_per_sec": 0, 00:14:43.700 "w_mbytes_per_sec": 0 00:14:43.700 }, 00:14:43.700 "claimed": false, 00:14:43.700 "zoned": false, 00:14:43.700 "supported_io_types": { 00:14:43.700 "read": true, 00:14:43.700 "write": true, 00:14:43.700 "unmap": true, 00:14:43.700 "flush": true, 00:14:43.700 "reset": true, 00:14:43.700 "nvme_admin": true, 00:14:43.700 "nvme_io": true, 00:14:43.700 "nvme_io_md": false, 00:14:43.700 "write_zeroes": true, 00:14:43.700 "zcopy": false, 00:14:43.700 "get_zone_info": false, 00:14:43.700 "zone_management": false, 00:14:43.700 "zone_append": false, 00:14:43.700 "compare": true, 00:14:43.700 "compare_and_write": true, 00:14:43.700 "abort": true, 00:14:43.700 "seek_hole": false, 00:14:43.700 "seek_data": false, 00:14:43.700 "copy": true, 00:14:43.700 "nvme_iov_md": false 00:14:43.700 }, 00:14:43.700 "memory_domains": [ 00:14:43.700 { 00:14:43.700 "dma_device_id": "system", 00:14:43.700 "dma_device_type": 1 00:14:43.700 } 00:14:43.700 ], 00:14:43.700 "driver_specific": { 00:14:43.700 "nvme": [ 00:14:43.700 { 00:14:43.700 "trid": { 00:14:43.700 "trtype": "TCP", 00:14:43.700 "adrfam": "IPv4", 00:14:43.700 "traddr": "10.0.0.2", 00:14:43.700 "trsvcid": "4420", 00:14:43.700 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:43.700 }, 00:14:43.700 "ctrlr_data": { 00:14:43.700 "cntlid": 1, 00:14:43.700 "vendor_id": "0x8086", 00:14:43.700 "model_number": "SPDK bdev Controller", 00:14:43.700 "serial_number": "SPDK0", 00:14:43.700 "firmware_revision": "24.09", 00:14:43.700 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:43.700 "oacs": { 00:14:43.700 "security": 0, 00:14:43.700 "format": 0, 00:14:43.700 "firmware": 0, 00:14:43.700 "ns_manage": 0 00:14:43.700 }, 00:14:43.700 "multi_ctrlr": true, 00:14:43.700 "ana_reporting": false 00:14:43.700 }, 
00:14:43.700 "vs": { 00:14:43.700 "nvme_version": "1.3" 00:14:43.700 }, 00:14:43.700 "ns_data": { 00:14:43.700 "id": 1, 00:14:43.700 "can_share": true 00:14:43.700 } 00:14:43.700 } 00:14:43.700 ], 00:14:43.700 "mp_policy": "active_passive" 00:14:43.700 } 00:14:43.700 } 00:14:43.700 ] 00:14:43.960 16:06:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2227613 00:14:43.960 16:06:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:43.960 16:06:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:43.960 Running I/O for 10 seconds... 00:14:44.899 Latency(us) 00:14:44.899 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.899 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:44.899 Nvme0n1 : 1.00 17564.00 68.61 0.00 0.00 0.00 0.00 0.00 00:14:44.899 =================================================================================================================== 00:14:44.899 Total : 17564.00 68.61 0.00 0.00 0.00 0.00 0.00 00:14:44.899 00:14:45.840 16:06:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6e5c46bd-5f4b-44c1-95e0-d2d05386812a 00:14:45.840 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:45.840 Nvme0n1 : 2.00 17670.00 69.02 0.00 0.00 0.00 0.00 0.00 00:14:45.840 =================================================================================================================== 00:14:45.840 Total : 17670.00 69.02 0.00 0.00 0.00 0.00 0.00 00:14:45.840 00:14:46.100 true 00:14:46.100 16:06:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e5c46bd-5f4b-44c1-95e0-d2d05386812a 00:14:46.100 16:06:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:46.100 16:06:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:46.100 16:06:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:46.100 16:06:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2227613 00:14:47.043 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:47.043 Nvme0n1 : 3.00 17702.67 69.15 0.00 0.00 0.00 0.00 0.00 00:14:47.043 =================================================================================================================== 00:14:47.043 Total : 17702.67 69.15 0.00 0.00 0.00 0.00 0.00 00:14:47.043 00:14:47.983 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:47.983 Nvme0n1 : 4.00 17737.00 69.29 0.00 0.00 0.00 0.00 0.00 00:14:47.983 =================================================================================================================== 00:14:47.983 Total : 17737.00 69.29 0.00 0.00 0.00 0.00 0.00 00:14:47.983 00:14:48.925 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:48.925 Nvme0n1 : 5.00 17760.80 69.38 0.00 0.00 0.00 0.00 0.00 00:14:48.925 =================================================================================================================== 00:14:48.925 
Total : 17760.80 69.38 0.00 0.00 0.00 0.00 0.00 00:14:48.925 00:14:49.867 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:49.867 Nvme0n1 : 6.00 17779.33 69.45 0.00 0.00 0.00 0.00 0.00 00:14:49.867 =================================================================================================================== 00:14:49.867 Total : 17779.33 69.45 0.00 0.00 0.00 0.00 0.00 00:14:49.867 00:14:50.810 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:50.810 Nvme0n1 : 7.00 17794.86 69.51 0.00 0.00 0.00 0.00 0.00 00:14:50.810 =================================================================================================================== 00:14:50.810 Total : 17794.86 69.51 0.00 0.00 0.00 0.00 0.00 00:14:50.810 00:14:52.261 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:52.261 Nvme0n1 : 8.00 17806.50 69.56 0.00 0.00 0.00 0.00 0.00 00:14:52.261 =================================================================================================================== 00:14:52.262 Total : 17806.50 69.56 0.00 0.00 0.00 0.00 0.00 00:14:52.262 00:14:52.862 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:52.862 Nvme0n1 : 9.00 17818.22 69.60 0.00 0.00 0.00 0.00 0.00 00:14:52.862 =================================================================================================================== 00:14:52.862 Total : 17818.22 69.60 0.00 0.00 0.00 0.00 0.00 00:14:52.862 00:14:54.247 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:54.247 Nvme0n1 : 10.00 17828.40 69.64 0.00 0.00 0.00 0.00 0.00 00:14:54.247 =================================================================================================================== 00:14:54.247 Total : 17828.40 69.64 0.00 0.00 0.00 0.00 0.00 00:14:54.247 00:14:54.247 00:14:54.247 Latency(us) 00:14:54.247 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.247 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:54.247 Nvme0n1 : 10.01 17827.88 69.64 0.00 0.00 7174.72 4505.60 11905.71 00:14:54.247 =================================================================================================================== 00:14:54.247 Total : 17827.88 69.64 0.00 0.00 7174.72 4505.60 11905.71 00:14:54.247 0 00:14:54.247 16:06:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2227299 00:14:54.247 16:06:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 2227299 ']' 00:14:54.247 16:06:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 2227299 00:14:54.247 16:06:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:14:54.247 16:06:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:54.247 16:06:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2227299 00:14:54.247 16:06:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:54.247 16:06:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:54.247 16:06:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2227299' 00:14:54.247 killing process with pid 2227299 00:14:54.247 16:06:29 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 2227299 00:14:54.247 Received shutdown signal, test time was about 10.000000 seconds 00:14:54.247 00:14:54.247 Latency(us) 00:14:54.247 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.247 =================================================================================================================== 00:14:54.247 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:54.247 16:06:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 2227299 00:14:54.247 16:06:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:54.247 16:06:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:54.508 16:06:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e5c46bd-5f4b-44c1-95e0-d2d05386812a 00:14:54.508 16:06:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:54.508 16:06:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:54.508 16:06:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:14:54.508 16:06:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:54.769 [2024-07-15 16:06:30.443543] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:54.769 16:06:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e5c46bd-5f4b-44c1-95e0-d2d05386812a 00:14:54.769 16:06:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:14:54.769 16:06:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e5c46bd-5f4b-44c1-95e0-d2d05386812a 00:14:54.769 16:06:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:54.769 16:06:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:54.769 16:06:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:54.769 16:06:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:54.769 16:06:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:54.769 16:06:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:54.769 16:06:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:54.769 16:06:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:54.769 16:06:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e5c46bd-5f4b-44c1-95e0-d2d05386812a 00:14:55.029 request: 00:14:55.029 { 00:14:55.029 "uuid": "6e5c46bd-5f4b-44c1-95e0-d2d05386812a", 00:14:55.029 "method": "bdev_lvol_get_lvstores", 00:14:55.029 "req_id": 1 00:14:55.029 } 00:14:55.029 Got JSON-RPC error response 00:14:55.029 response: 00:14:55.029 { 00:14:55.029 "code": -19, 00:14:55.029 "message": "No such device" 00:14:55.029 } 00:14:55.029 16:06:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:14:55.029 16:06:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:55.029 16:06:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:55.029 16:06:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:55.029 16:06:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:55.029 aio_bdev 00:14:55.029 16:06:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0491e444-dfe1-4132-9aef-814735b2884e 00:14:55.029 16:06:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=0491e444-dfe1-4132-9aef-814735b2884e 00:14:55.029 16:06:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:55.029 16:06:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:14:55.029 16:06:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:55.029 16:06:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:55.029 16:06:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:55.289 16:06:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0491e444-dfe1-4132-9aef-814735b2884e -t 2000 00:14:55.289 [ 00:14:55.289 { 00:14:55.289 "name": "0491e444-dfe1-4132-9aef-814735b2884e", 00:14:55.289 "aliases": [ 00:14:55.289 "lvs/lvol" 00:14:55.289 ], 00:14:55.289 "product_name": "Logical Volume", 00:14:55.289 "block_size": 4096, 00:14:55.289 "num_blocks": 38912, 00:14:55.289 "uuid": "0491e444-dfe1-4132-9aef-814735b2884e", 00:14:55.290 "assigned_rate_limits": { 00:14:55.290 "rw_ios_per_sec": 0, 00:14:55.290 "rw_mbytes_per_sec": 0, 00:14:55.290 "r_mbytes_per_sec": 0, 00:14:55.290 "w_mbytes_per_sec": 0 00:14:55.290 }, 00:14:55.290 "claimed": false, 00:14:55.290 "zoned": false, 00:14:55.290 "supported_io_types": { 00:14:55.290 "read": true, 00:14:55.290 "write": true, 00:14:55.290 "unmap": true, 00:14:55.290 "flush": false, 00:14:55.290 "reset": true, 00:14:55.290 "nvme_admin": false, 00:14:55.290 "nvme_io": false, 00:14:55.290 
"nvme_io_md": false, 00:14:55.290 "write_zeroes": true, 00:14:55.290 "zcopy": false, 00:14:55.290 "get_zone_info": false, 00:14:55.290 "zone_management": false, 00:14:55.290 "zone_append": false, 00:14:55.290 "compare": false, 00:14:55.290 "compare_and_write": false, 00:14:55.290 "abort": false, 00:14:55.290 "seek_hole": true, 00:14:55.290 "seek_data": true, 00:14:55.290 "copy": false, 00:14:55.290 "nvme_iov_md": false 00:14:55.290 }, 00:14:55.290 "driver_specific": { 00:14:55.290 "lvol": { 00:14:55.290 "lvol_store_uuid": "6e5c46bd-5f4b-44c1-95e0-d2d05386812a", 00:14:55.290 "base_bdev": "aio_bdev", 00:14:55.290 "thin_provision": false, 00:14:55.290 "num_allocated_clusters": 38, 00:14:55.290 "snapshot": false, 00:14:55.290 "clone": false, 00:14:55.290 "esnap_clone": false 00:14:55.290 } 00:14:55.290 } 00:14:55.290 } 00:14:55.290 ] 00:14:55.290 16:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:14:55.290 16:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e5c46bd-5f4b-44c1-95e0-d2d05386812a 00:14:55.290 16:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:55.550 16:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:55.550 16:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e5c46bd-5f4b-44c1-95e0-d2d05386812a 00:14:55.550 16:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:55.812 16:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:55.812 16:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0491e444-dfe1-4132-9aef-814735b2884e 00:14:55.812 16:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6e5c46bd-5f4b-44c1-95e0-d2d05386812a 00:14:56.073 16:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:56.073 16:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:56.073 00:14:56.073 real 0m15.130s 00:14:56.073 user 0m14.619s 00:14:56.073 sys 0m1.506s 00:14:56.073 16:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:56.073 16:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:56.073 ************************************ 00:14:56.073 END TEST lvs_grow_clean 00:14:56.073 ************************************ 00:14:56.333 16:06:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:14:56.333 16:06:31 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:56.333 16:06:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:56.333 16:06:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:14:56.333 16:06:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:56.333 ************************************ 00:14:56.333 START TEST lvs_grow_dirty 00:14:56.333 ************************************ 00:14:56.333 16:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:14:56.333 16:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:56.333 16:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:56.333 16:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:56.333 16:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:56.333 16:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:56.333 16:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:56.333 16:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:56.333 16:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:56.333 16:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:56.594 16:06:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:56.594 16:06:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:56.594 16:06:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=a65b00ec-ff79-4a04-a649-5cf2a062196e 00:14:56.594 16:06:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a65b00ec-ff79-4a04-a649-5cf2a062196e 00:14:56.594 16:06:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:56.853 16:06:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:56.853 16:06:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:56.853 16:06:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a65b00ec-ff79-4a04-a649-5cf2a062196e lvol 150 00:14:56.853 16:06:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=56d7ff48-2c72-4d80-8e80-4389f627eb91 00:14:56.853 16:06:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:56.853 16:06:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:57.114 
[2024-07-15 16:06:32.780129] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:57.114 [2024-07-15 16:06:32.780177] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:57.114 true 00:14:57.114 16:06:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a65b00ec-ff79-4a04-a649-5cf2a062196e 00:14:57.114 16:06:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:57.114 16:06:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:57.114 16:06:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:57.375 16:06:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 56d7ff48-2c72-4d80-8e80-4389f627eb91 00:14:57.635 16:06:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:57.635 [2024-07-15 16:06:33.377938] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:57.635 16:06:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:57.896 16:06:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2230362 00:14:57.896 16:06:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:57.896 16:06:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2230362 /var/tmp/bdevperf.sock 00:14:57.896 16:06:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:57.896 16:06:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2230362 ']' 00:14:57.896 16:06:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:57.896 16:06:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:57.896 16:06:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:57.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:14:57.896 16:06:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:57.896 16:06:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:57.896 [2024-07-15 16:06:33.592459] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:14:57.896 [2024-07-15 16:06:33.592508] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2230362 ] 00:14:57.896 EAL: No free 2048 kB hugepages reported on node 1 00:14:57.896 [2024-07-15 16:06:33.666121] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.896 [2024-07-15 16:06:33.720766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:58.838 16:06:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:58.838 16:06:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:14:58.838 16:06:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:58.838 Nvme0n1 00:14:58.838 16:06:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:59.100 [ 00:14:59.100 { 00:14:59.100 "name": "Nvme0n1", 00:14:59.100 "aliases": [ 00:14:59.100 "56d7ff48-2c72-4d80-8e80-4389f627eb91" 00:14:59.100 ], 00:14:59.100 "product_name": "NVMe disk", 00:14:59.100 "block_size": 4096, 00:14:59.100 "num_blocks": 38912, 00:14:59.100 "uuid": "56d7ff48-2c72-4d80-8e80-4389f627eb91", 00:14:59.100 "assigned_rate_limits": { 00:14:59.100 "rw_ios_per_sec": 0, 00:14:59.100 "rw_mbytes_per_sec": 0, 00:14:59.100 "r_mbytes_per_sec": 0, 00:14:59.100 "w_mbytes_per_sec": 0 00:14:59.100 }, 00:14:59.100 "claimed": false, 00:14:59.100 "zoned": false, 00:14:59.100 "supported_io_types": { 00:14:59.100 "read": true, 00:14:59.100 "write": true, 00:14:59.100 "unmap": true, 00:14:59.100 "flush": true, 00:14:59.100 "reset": true, 00:14:59.100 "nvme_admin": true, 00:14:59.100 "nvme_io": true, 00:14:59.100 "nvme_io_md": false, 00:14:59.100 "write_zeroes": true, 00:14:59.100 "zcopy": false, 00:14:59.100 "get_zone_info": false, 00:14:59.100 "zone_management": false, 00:14:59.100 "zone_append": false, 00:14:59.100 "compare": true, 00:14:59.100 "compare_and_write": true, 00:14:59.100 "abort": true, 00:14:59.100 "seek_hole": false, 00:14:59.100 "seek_data": false, 00:14:59.100 "copy": true, 00:14:59.100 "nvme_iov_md": false 00:14:59.100 }, 00:14:59.100 "memory_domains": [ 00:14:59.100 { 00:14:59.100 "dma_device_id": "system", 00:14:59.100 "dma_device_type": 1 00:14:59.100 } 00:14:59.100 ], 00:14:59.100 "driver_specific": { 00:14:59.100 "nvme": [ 00:14:59.100 { 00:14:59.100 "trid": { 00:14:59.100 "trtype": "TCP", 00:14:59.100 "adrfam": "IPv4", 00:14:59.100 "traddr": "10.0.0.2", 00:14:59.100 "trsvcid": "4420", 00:14:59.100 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:59.100 }, 00:14:59.100 "ctrlr_data": { 00:14:59.100 "cntlid": 1, 00:14:59.100 "vendor_id": "0x8086", 00:14:59.100 "model_number": "SPDK bdev Controller", 00:14:59.100 "serial_number": "SPDK0", 
00:14:59.100 "firmware_revision": "24.09", 00:14:59.100 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:59.100 "oacs": { 00:14:59.100 "security": 0, 00:14:59.100 "format": 0, 00:14:59.100 "firmware": 0, 00:14:59.100 "ns_manage": 0 00:14:59.100 }, 00:14:59.100 "multi_ctrlr": true, 00:14:59.100 "ana_reporting": false 00:14:59.100 }, 00:14:59.100 "vs": { 00:14:59.100 "nvme_version": "1.3" 00:14:59.100 }, 00:14:59.101 "ns_data": { 00:14:59.101 "id": 1, 00:14:59.101 "can_share": true 00:14:59.101 } 00:14:59.101 } 00:14:59.101 ], 00:14:59.101 "mp_policy": "active_passive" 00:14:59.101 } 00:14:59.101 } 00:14:59.101 ] 00:14:59.101 16:06:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2230695 00:14:59.101 16:06:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:59.101 16:06:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:59.101 Running I/O for 10 seconds... 00:15:00.042 Latency(us) 00:15:00.042 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.042 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:00.042 Nvme0n1 : 1.00 17644.00 68.92 0.00 0.00 0.00 0.00 0.00 00:15:00.042 =================================================================================================================== 00:15:00.042 Total : 17644.00 68.92 0.00 0.00 0.00 0.00 0.00 00:15:00.042 00:15:00.982 16:06:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a65b00ec-ff79-4a04-a649-5cf2a062196e 00:15:01.242 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:01.242 Nvme0n1 : 2.00 17714.00 69.20 0.00 0.00 0.00 0.00 0.00 00:15:01.242 =================================================================================================================== 00:15:01.242 Total : 17714.00 69.20 0.00 0.00 0.00 0.00 0.00 00:15:01.242 00:15:01.242 true 00:15:01.242 16:06:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a65b00ec-ff79-4a04-a649-5cf2a062196e 00:15:01.242 16:06:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:01.242 16:06:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:01.242 16:06:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:01.242 16:06:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2230695 00:15:02.182 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:02.182 Nvme0n1 : 3.00 17742.67 69.31 0.00 0.00 0.00 0.00 0.00 00:15:02.182 =================================================================================================================== 00:15:02.182 Total : 17742.67 69.31 0.00 0.00 0.00 0.00 0.00 00:15:02.182 00:15:03.123 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:03.123 Nvme0n1 : 4.00 17771.00 69.42 0.00 0.00 0.00 0.00 0.00 00:15:03.123 =================================================================================================================== 00:15:03.123 Total : 17771.00 69.42 0.00 
0.00 0.00 0.00 0.00 00:15:03.123 00:15:04.062 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:04.062 Nvme0n1 : 5.00 17789.60 69.49 0.00 0.00 0.00 0.00 0.00 00:15:04.062 =================================================================================================================== 00:15:04.062 Total : 17789.60 69.49 0.00 0.00 0.00 0.00 0.00 00:15:04.062 00:15:05.002 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:05.002 Nvme0n1 : 6.00 17803.33 69.54 0.00 0.00 0.00 0.00 0.00 00:15:05.002 =================================================================================================================== 00:15:05.002 Total : 17803.33 69.54 0.00 0.00 0.00 0.00 0.00 00:15:05.002 00:15:06.386 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:06.386 Nvme0n1 : 7.00 17818.86 69.60 0.00 0.00 0.00 0.00 0.00 00:15:06.386 =================================================================================================================== 00:15:06.386 Total : 17818.86 69.60 0.00 0.00 0.00 0.00 0.00 00:15:06.386 00:15:07.328 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:07.328 Nvme0n1 : 8.00 17829.50 69.65 0.00 0.00 0.00 0.00 0.00 00:15:07.328 =================================================================================================================== 00:15:07.328 Total : 17829.50 69.65 0.00 0.00 0.00 0.00 0.00 00:15:07.328 00:15:08.271 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:08.271 Nvme0n1 : 9.00 17840.44 69.69 0.00 0.00 0.00 0.00 0.00 00:15:08.271 =================================================================================================================== 00:15:08.271 Total : 17840.44 69.69 0.00 0.00 0.00 0.00 0.00 00:15:08.271 00:15:09.247 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:09.247 Nvme0n1 : 10.00 17850.00 69.73 0.00 0.00 0.00 0.00 0.00 00:15:09.247 =================================================================================================================== 00:15:09.247 Total : 17850.00 69.73 0.00 0.00 0.00 0.00 0.00 00:15:09.247 00:15:09.247 00:15:09.247 Latency(us) 00:15:09.247 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:09.247 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:09.247 Nvme0n1 : 10.01 17850.65 69.73 0.00 0.00 7165.78 1665.71 9175.04 00:15:09.247 =================================================================================================================== 00:15:09.247 Total : 17850.65 69.73 0.00 0.00 7165.78 1665.71 9175.04 00:15:09.247 0 00:15:09.247 16:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2230362 00:15:09.247 16:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 2230362 ']' 00:15:09.247 16:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 2230362 00:15:09.247 16:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:15:09.247 16:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:09.247 16:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2230362 00:15:09.247 16:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:09.247 16:06:44 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:09.247 16:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2230362' 00:15:09.247 killing process with pid 2230362 00:15:09.247 16:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 2230362 00:15:09.247 Received shutdown signal, test time was about 10.000000 seconds 00:15:09.247 00:15:09.247 Latency(us) 00:15:09.247 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:09.247 =================================================================================================================== 00:15:09.247 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:09.247 16:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 2230362 00:15:09.247 16:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:09.507 16:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:09.768 16:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a65b00ec-ff79-4a04-a649-5cf2a062196e 00:15:09.768 16:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:09.768 16:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:09.768 16:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:15:09.768 16:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2226888 00:15:09.768 16:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2226888 00:15:09.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2226888 Killed "${NVMF_APP[@]}" "$@" 00:15:09.768 16:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:15:09.768 16:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:15:09.768 16:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:09.768 16:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:09.768 16:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:09.768 16:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2232723 00:15:09.768 16:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 2232723 00:15:09.768 16:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:09.768 16:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2232723 ']' 00:15:09.768 16:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.768 16:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:15:09.768 16:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.768 16:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:09.768 16:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:10.028 [2024-07-15 16:06:45.614670] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:15:10.028 [2024-07-15 16:06:45.614723] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:10.028 EAL: No free 2048 kB hugepages reported on node 1 00:15:10.028 [2024-07-15 16:06:45.681198] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.028 [2024-07-15 16:06:45.747818] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:10.028 [2024-07-15 16:06:45.747855] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:10.028 [2024-07-15 16:06:45.747863] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:10.028 [2024-07-15 16:06:45.747870] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:10.028 [2024-07-15 16:06:45.747875] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
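The trace above relaunches nvmf_tgt with core mask 0x1 inside the cvl_0_0_ns_spdk namespace and then sits in waitforlisten until the RPC socket answers. A minimal sketch of that bring-up, assuming the stock scripts/rpc.py and the default /var/tmp/spdk.sock socket (the polling loop is illustrative, not the harness's exact waitforlisten implementation):

  # start the target in the test namespace on core 0, all tracepoint groups enabled
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  # poll the RPC socket until the app is up (what waitforlisten effectively does)
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done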
00:15:10.028 [2024-07-15 16:06:45.747895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.601 16:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:10.601 16:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:15:10.601 16:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:10.601 16:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:10.601 16:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:10.601 16:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:10.601 16:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:10.862 [2024-07-15 16:06:46.548558] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:10.862 [2024-07-15 16:06:46.548644] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:10.862 [2024-07-15 16:06:46.548672] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:10.862 16:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:15:10.862 16:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 56d7ff48-2c72-4d80-8e80-4389f627eb91 00:15:10.862 16:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=56d7ff48-2c72-4d80-8e80-4389f627eb91 00:15:10.862 16:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:10.862 16:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:15:10.862 16:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:10.862 16:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:10.862 16:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:11.122 16:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 56d7ff48-2c72-4d80-8e80-4389f627eb91 -t 2000 00:15:11.122 [ 00:15:11.122 { 00:15:11.122 "name": "56d7ff48-2c72-4d80-8e80-4389f627eb91", 00:15:11.122 "aliases": [ 00:15:11.122 "lvs/lvol" 00:15:11.122 ], 00:15:11.122 "product_name": "Logical Volume", 00:15:11.122 "block_size": 4096, 00:15:11.122 "num_blocks": 38912, 00:15:11.122 "uuid": "56d7ff48-2c72-4d80-8e80-4389f627eb91", 00:15:11.122 "assigned_rate_limits": { 00:15:11.122 "rw_ios_per_sec": 0, 00:15:11.122 "rw_mbytes_per_sec": 0, 00:15:11.122 "r_mbytes_per_sec": 0, 00:15:11.122 "w_mbytes_per_sec": 0 00:15:11.122 }, 00:15:11.122 "claimed": false, 00:15:11.122 "zoned": false, 00:15:11.122 "supported_io_types": { 00:15:11.122 "read": true, 00:15:11.122 "write": true, 00:15:11.122 "unmap": true, 00:15:11.122 "flush": false, 00:15:11.122 "reset": true, 00:15:11.122 "nvme_admin": false, 00:15:11.122 "nvme_io": false, 00:15:11.122 "nvme_io_md": 
false, 00:15:11.122 "write_zeroes": true, 00:15:11.122 "zcopy": false, 00:15:11.122 "get_zone_info": false, 00:15:11.122 "zone_management": false, 00:15:11.122 "zone_append": false, 00:15:11.122 "compare": false, 00:15:11.122 "compare_and_write": false, 00:15:11.122 "abort": false, 00:15:11.122 "seek_hole": true, 00:15:11.122 "seek_data": true, 00:15:11.122 "copy": false, 00:15:11.122 "nvme_iov_md": false 00:15:11.122 }, 00:15:11.122 "driver_specific": { 00:15:11.122 "lvol": { 00:15:11.122 "lvol_store_uuid": "a65b00ec-ff79-4a04-a649-5cf2a062196e", 00:15:11.122 "base_bdev": "aio_bdev", 00:15:11.122 "thin_provision": false, 00:15:11.122 "num_allocated_clusters": 38, 00:15:11.122 "snapshot": false, 00:15:11.122 "clone": false, 00:15:11.122 "esnap_clone": false 00:15:11.122 } 00:15:11.122 } 00:15:11.122 } 00:15:11.122 ] 00:15:11.122 16:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:15:11.122 16:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a65b00ec-ff79-4a04-a649-5cf2a062196e 00:15:11.122 16:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:15:11.384 16:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:15:11.384 16:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a65b00ec-ff79-4a04-a649-5cf2a062196e 00:15:11.384 16:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:15:11.384 16:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:15:11.384 16:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:11.645 [2024-07-15 16:06:47.332591] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:11.645 16:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a65b00ec-ff79-4a04-a649-5cf2a062196e 00:15:11.645 16:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:15:11.645 16:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a65b00ec-ff79-4a04-a649-5cf2a062196e 00:15:11.645 16:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:11.645 16:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:11.645 16:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:11.645 16:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:11.645 16:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
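Deleting the AIO bdev a few lines back hot-removed the lvstore along with it, so the bdev_lvol_get_lvstores call being wrapped in NOT here is expected to fail. The assertion boils down to something like the following sketch (the echo is only an illustration of the negative check, not the harness code):

  # base bdev gone => lvstore gone; this lookup should now report "No such device"
  ./scripts/rpc.py bdev_aio_delete aio_bdev
  if ./scripts/rpc.py bdev_lvol_get_lvstores -u a65b00ec-ff79-4a04-a649-5cf2a062196e; then
      echo "lvstore unexpectedly still present" >&2
  fi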
00:15:11.645 16:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:11.645 16:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:11.645 16:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:11.645 16:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a65b00ec-ff79-4a04-a649-5cf2a062196e 00:15:11.906 request: 00:15:11.906 { 00:15:11.906 "uuid": "a65b00ec-ff79-4a04-a649-5cf2a062196e", 00:15:11.906 "method": "bdev_lvol_get_lvstores", 00:15:11.906 "req_id": 1 00:15:11.906 } 00:15:11.906 Got JSON-RPC error response 00:15:11.906 response: 00:15:11.906 { 00:15:11.906 "code": -19, 00:15:11.906 "message": "No such device" 00:15:11.906 } 00:15:11.906 16:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:15:11.906 16:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:11.906 16:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:11.906 16:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:11.906 16:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:11.907 aio_bdev 00:15:11.907 16:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 56d7ff48-2c72-4d80-8e80-4389f627eb91 00:15:11.907 16:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=56d7ff48-2c72-4d80-8e80-4389f627eb91 00:15:11.907 16:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:11.907 16:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:15:11.907 16:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:11.907 16:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:11.907 16:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:12.168 16:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 56d7ff48-2c72-4d80-8e80-4389f627eb91 -t 2000 00:15:12.168 [ 00:15:12.168 { 00:15:12.168 "name": "56d7ff48-2c72-4d80-8e80-4389f627eb91", 00:15:12.168 "aliases": [ 00:15:12.168 "lvs/lvol" 00:15:12.168 ], 00:15:12.168 "product_name": "Logical Volume", 00:15:12.168 "block_size": 4096, 00:15:12.168 "num_blocks": 38912, 00:15:12.168 "uuid": "56d7ff48-2c72-4d80-8e80-4389f627eb91", 00:15:12.168 "assigned_rate_limits": { 00:15:12.168 "rw_ios_per_sec": 0, 00:15:12.168 "rw_mbytes_per_sec": 0, 00:15:12.168 "r_mbytes_per_sec": 0, 00:15:12.168 "w_mbytes_per_sec": 0 00:15:12.168 }, 00:15:12.168 "claimed": false, 00:15:12.168 "zoned": false, 00:15:12.168 "supported_io_types": { 
00:15:12.168 "read": true, 00:15:12.168 "write": true, 00:15:12.168 "unmap": true, 00:15:12.168 "flush": false, 00:15:12.168 "reset": true, 00:15:12.168 "nvme_admin": false, 00:15:12.168 "nvme_io": false, 00:15:12.168 "nvme_io_md": false, 00:15:12.168 "write_zeroes": true, 00:15:12.168 "zcopy": false, 00:15:12.168 "get_zone_info": false, 00:15:12.168 "zone_management": false, 00:15:12.168 "zone_append": false, 00:15:12.168 "compare": false, 00:15:12.168 "compare_and_write": false, 00:15:12.168 "abort": false, 00:15:12.168 "seek_hole": true, 00:15:12.168 "seek_data": true, 00:15:12.168 "copy": false, 00:15:12.168 "nvme_iov_md": false 00:15:12.168 }, 00:15:12.168 "driver_specific": { 00:15:12.168 "lvol": { 00:15:12.168 "lvol_store_uuid": "a65b00ec-ff79-4a04-a649-5cf2a062196e", 00:15:12.168 "base_bdev": "aio_bdev", 00:15:12.168 "thin_provision": false, 00:15:12.168 "num_allocated_clusters": 38, 00:15:12.168 "snapshot": false, 00:15:12.168 "clone": false, 00:15:12.168 "esnap_clone": false 00:15:12.168 } 00:15:12.168 } 00:15:12.168 } 00:15:12.168 ] 00:15:12.168 16:06:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:15:12.168 16:06:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a65b00ec-ff79-4a04-a649-5cf2a062196e 00:15:12.169 16:06:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:12.429 16:06:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:12.429 16:06:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a65b00ec-ff79-4a04-a649-5cf2a062196e 00:15:12.429 16:06:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:12.690 16:06:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:12.690 16:06:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 56d7ff48-2c72-4d80-8e80-4389f627eb91 00:15:12.690 16:06:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a65b00ec-ff79-4a04-a649-5cf2a062196e 00:15:12.950 16:06:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:12.950 16:06:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:13.211 00:15:13.211 real 0m16.839s 00:15:13.211 user 0m43.739s 00:15:13.211 sys 0m3.166s 00:15:13.211 16:06:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:13.211 16:06:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:13.211 ************************************ 00:15:13.211 END TEST lvs_grow_dirty 00:15:13.211 ************************************ 00:15:13.211 16:06:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:15:13.211 16:06:48 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
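The pass criteria for lvs_grow_dirty are the two jq checks above: after the second recovery the grown lvstore must still report 99 total data clusters with 61 free before the volume, the store and the AIO bdev are torn down. Condensed, that verification plus cleanup is roughly (UUIDs taken from this run):

  lvs=a65b00ec-ff79-4a04-a649-5cf2a062196e
  free=$(./scripts/rpc.py bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].free_clusters')
  total=$(./scripts/rpc.py bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].total_data_clusters')
  (( free == 61 && total == 99 ))        # the grow survived the dirty shutdown
  ./scripts/rpc.py bdev_lvol_delete 56d7ff48-2c72-4d80-8e80-4389f627eb91
  ./scripts/rpc.py bdev_lvol_delete_lvstore -u $lvs
  ./scripts/rpc.py bdev_aio_delete aio_bdev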
00:15:13.211 16:06:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:15:13.211 16:06:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:15:13.211 16:06:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:13.211 16:06:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:13.211 16:06:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:13.211 16:06:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:13.211 16:06:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:13.211 16:06:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:13.211 nvmf_trace.0 00:15:13.211 16:06:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:15:13.211 16:06:48 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:13.211 16:06:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:13.211 16:06:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:15:13.211 16:06:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:13.211 16:06:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:15:13.211 16:06:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:13.211 16:06:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:13.211 rmmod nvme_tcp 00:15:13.211 rmmod nvme_fabrics 00:15:13.211 rmmod nvme_keyring 00:15:13.211 16:06:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:13.211 16:06:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:15:13.211 16:06:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:15:13.211 16:06:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2232723 ']' 00:15:13.211 16:06:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2232723 00:15:13.211 16:06:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 2232723 ']' 00:15:13.211 16:06:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 2232723 00:15:13.211 16:06:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:15:13.211 16:06:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:13.211 16:06:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2232723 00:15:13.211 16:06:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:13.211 16:06:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:13.211 16:06:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2232723' 00:15:13.211 killing process with pid 2232723 00:15:13.211 16:06:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 2232723 00:15:13.211 16:06:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 2232723 00:15:13.472 16:06:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:13.472 16:06:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:13.472 16:06:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:13.472 
16:06:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:13.472 16:06:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:13.472 16:06:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:13.472 16:06:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:13.472 16:06:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.020 16:06:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:16.020 00:15:16.020 real 0m42.674s 00:15:16.020 user 1m4.201s 00:15:16.020 sys 0m10.356s 00:15:16.020 16:06:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:16.020 16:06:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:16.020 ************************************ 00:15:16.020 END TEST nvmf_lvs_grow 00:15:16.020 ************************************ 00:15:16.020 16:06:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:16.020 16:06:51 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:16.020 16:06:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:16.020 16:06:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:16.020 16:06:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:16.020 ************************************ 00:15:16.020 START TEST nvmf_bdev_io_wait 00:15:16.020 ************************************ 00:15:16.020 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:16.020 * Looking for test storage... 
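Between the two suites nvmftestfini unwinds everything the lvs_grow run set up; in outline the teardown traced above amounts to the following sketch (the harness's killprocess additionally polls ps until the target pid is really gone):

  sync
  modprobe -v -r nvme-tcp                  # source of the rmmod output above
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"       # stop the nvmf_tgt started for the suite
  ip -4 addr flush cvl_0_1                 # clear the initiator-side test address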
00:15:16.020 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:16.020 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:16.020 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:15:16.020 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:16.020 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:16.020 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:16.020 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:16.020 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:16.020 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:16.020 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:16.020 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:16.020 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:16.020 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:16.020 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:16.020 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:16.020 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:16.020 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:16.020 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:16.020 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:16.020 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:16.020 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:16.020 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:16.020 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:16.020 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.020 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.020 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.020 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:15:16.020 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.020 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:15:16.020 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:16.020 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:16.020 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:16.021 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:16.021 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:16.021 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:16.021 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:16.021 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:16.021 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:16.021 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:16.021 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:16.021 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:16.021 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:16.021 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:16.021 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:16.021 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:16.021 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.021 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:16.021 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.021 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:16.021 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:16.021 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:15:16.021 16:06:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:22.611 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:22.611 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:15:22.611 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:22.611 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:22.611 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:22.611 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:22.611 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:22.611 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:15:22.611 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:22.611 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:15:22.611 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:15:22.611 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:15:22.611 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:15:22.611 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:15:22.611 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:15:22.611 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:22.611 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:22.611 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:22.611 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:22.611 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:22.611 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:22.611 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:22.611 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:22.612 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:22.612 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:22.612 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:22.612 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:22.612 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:22.612 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.428 ms 00:15:22.612 00:15:22.612 --- 10.0.0.2 ping statistics --- 00:15:22.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:22.612 rtt min/avg/max/mdev = 0.428/0.428/0.428/0.000 ms 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:22.612 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:22.612 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.438 ms 00:15:22.612 00:15:22.612 --- 10.0.0.1 ping statistics --- 00:15:22.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:22.612 rtt min/avg/max/mdev = 0.438/0.438/0.438/0.000 ms 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:22.612 16:06:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:22.612 16:06:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:22.612 16:06:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:22.612 16:06:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:22.612 16:06:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:22.612 16:06:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2237544 00:15:22.612 16:06:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2237544 00:15:22.612 16:06:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 2237544 ']' 00:15:22.612 16:06:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.612 16:06:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:22.612 16:06:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.612 16:06:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:22.612 16:06:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:22.612 16:06:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:22.612 [2024-07-15 16:06:58.075878] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
00:15:22.612 [2024-07-15 16:06:58.075980] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:22.612 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.612 [2024-07-15 16:06:58.148752] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:22.612 [2024-07-15 16:06:58.225719] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:22.612 [2024-07-15 16:06:58.225760] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:22.612 [2024-07-15 16:06:58.225767] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:22.612 [2024-07-15 16:06:58.225774] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:22.612 [2024-07-15 16:06:58.225779] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:22.613 [2024-07-15 16:06:58.225920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:22.613 [2024-07-15 16:06:58.226042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:22.613 [2024-07-15 16:06:58.226202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.613 [2024-07-15 16:06:58.226203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:23.184 16:06:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:23.184 16:06:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:15:23.184 16:06:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:23.184 16:06:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:23.184 16:06:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:23.184 16:06:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:23.184 16:06:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:23.184 16:06:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.184 16:06:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:23.184 16:06:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.184 16:06:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:23.184 16:06:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.184 16:06:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:23.184 16:06:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.184 16:06:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:23.184 16:06:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.184 16:06:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:23.184 [2024-07-15 16:06:58.950139] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:23.184 16:06:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
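Because the target was launched with --wait-for-rpc, its subsystems stay uninitialized until driven over RPC, which is what lets this test shrink the bdev_io pool before anything else starts. The three rpc_cmd calls above correspond roughly to:

  # deliberately tiny bdev_io pool (-p) and per-thread cache (-c) so I/O must wait for units
  ./scripts/rpc.py bdev_set_options -p 5 -c 1
  # finish the subsystem init that --wait-for-rpc deferred
  ./scripts/rpc.py framework_start_init
  # then bring up the NVMe/TCP transport with the suite's options
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192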
00:15:23.184 16:06:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:23.184 16:06:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.184 16:06:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:23.184 Malloc0 00:15:23.184 16:06:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.184 16:06:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:23.184 16:06:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.184 16:06:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:23.184 16:06:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.184 16:06:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:23.184 16:06:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.184 16:06:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:23.184 16:06:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.184 16:06:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:23.184 16:06:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.184 16:06:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:23.184 [2024-07-15 16:06:59.018402] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:23.184 16:06:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.184 16:06:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2237799 00:15:23.479 16:06:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2237801 00:15:23.479 16:06:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:23.479 16:06:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:23.479 16:06:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:23.479 16:06:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:23.480 16:06:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:23.480 16:06:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:23.480 { 00:15:23.480 "params": { 00:15:23.480 "name": "Nvme$subsystem", 00:15:23.480 "trtype": "$TEST_TRANSPORT", 00:15:23.480 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:23.480 "adrfam": "ipv4", 00:15:23.480 "trsvcid": "$NVMF_PORT", 00:15:23.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:23.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:23.480 "hdgst": ${hdgst:-false}, 00:15:23.480 "ddgst": ${ddgst:-false} 00:15:23.480 }, 00:15:23.480 "method": "bdev_nvme_attach_controller" 00:15:23.480 } 00:15:23.480 EOF 00:15:23.480 )") 00:15:23.480 16:06:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2237803 00:15:23.480 16:06:59 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:23.480 16:06:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:23.480 16:06:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:23.480 16:06:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:23.480 16:06:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:23.480 16:06:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2237806 00:15:23.480 16:06:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:23.480 { 00:15:23.480 "params": { 00:15:23.480 "name": "Nvme$subsystem", 00:15:23.480 "trtype": "$TEST_TRANSPORT", 00:15:23.480 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:23.480 "adrfam": "ipv4", 00:15:23.480 "trsvcid": "$NVMF_PORT", 00:15:23.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:23.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:23.480 "hdgst": ${hdgst:-false}, 00:15:23.480 "ddgst": ${ddgst:-false} 00:15:23.480 }, 00:15:23.480 "method": "bdev_nvme_attach_controller" 00:15:23.480 } 00:15:23.480 EOF 00:15:23.480 )") 00:15:23.480 16:06:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:23.480 16:06:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:15:23.480 16:06:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:23.480 16:06:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:23.480 16:06:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:23.480 16:06:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:23.480 16:06:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:23.480 16:06:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:23.480 { 00:15:23.480 "params": { 00:15:23.480 "name": "Nvme$subsystem", 00:15:23.480 "trtype": "$TEST_TRANSPORT", 00:15:23.480 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:23.480 "adrfam": "ipv4", 00:15:23.480 "trsvcid": "$NVMF_PORT", 00:15:23.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:23.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:23.480 "hdgst": ${hdgst:-false}, 00:15:23.480 "ddgst": ${ddgst:-false} 00:15:23.480 }, 00:15:23.480 "method": "bdev_nvme_attach_controller" 00:15:23.480 } 00:15:23.480 EOF 00:15:23.480 )") 00:15:23.480 16:06:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:23.480 16:06:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:23.480 16:06:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:23.480 16:06:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:23.480 16:06:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:23.480 16:06:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:23.480 16:06:59 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:23.480 { 00:15:23.480 "params": { 00:15:23.480 "name": "Nvme$subsystem", 00:15:23.480 "trtype": "$TEST_TRANSPORT", 00:15:23.480 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:23.480 "adrfam": "ipv4", 00:15:23.480 "trsvcid": "$NVMF_PORT", 00:15:23.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:23.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:23.480 "hdgst": ${hdgst:-false}, 00:15:23.480 "ddgst": ${ddgst:-false} 00:15:23.480 }, 00:15:23.480 "method": "bdev_nvme_attach_controller" 00:15:23.480 } 00:15:23.480 EOF 00:15:23.480 )") 00:15:23.480 16:06:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:23.480 16:06:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2237799 00:15:23.480 16:06:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:23.480 16:06:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:23.480 16:06:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:23.480 16:06:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:23.480 16:06:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:23.480 16:06:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:23.480 "params": { 00:15:23.480 "name": "Nvme1", 00:15:23.480 "trtype": "tcp", 00:15:23.480 "traddr": "10.0.0.2", 00:15:23.480 "adrfam": "ipv4", 00:15:23.480 "trsvcid": "4420", 00:15:23.480 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:23.480 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:23.480 "hdgst": false, 00:15:23.480 "ddgst": false 00:15:23.480 }, 00:15:23.480 "method": "bdev_nvme_attach_controller" 00:15:23.480 }' 00:15:23.480 16:06:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
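Each bdevperf instance above gets its target description as JSON on an anonymous pipe: the --json /dev/fd/63 argument is bash process substitution around gen_nvmf_target_json, whose heredoc template resolves, as just printed, to a single bdev_nvme_attach_controller call against 10.0.0.2:4420 and nqn.2016-06.io.spdk:cnode1. A sketch of one such launch under that assumption, with paths shortened to the SPDK build tree:

  # write workload: core mask 0x10, shm instance 1, 256 MiB of memory,
  # 128-deep 4 KiB I/O for 1 second against the attached Nvme1n1 bdev
  build/examples/bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
      -q 128 -o 4096 -w write -t 1 -s 256 &
  WRITE_PID=$!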
00:15:23.480 16:06:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:23.480 16:06:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:23.480 "params": { 00:15:23.480 "name": "Nvme1", 00:15:23.480 "trtype": "tcp", 00:15:23.480 "traddr": "10.0.0.2", 00:15:23.480 "adrfam": "ipv4", 00:15:23.480 "trsvcid": "4420", 00:15:23.480 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:23.480 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:23.480 "hdgst": false, 00:15:23.480 "ddgst": false 00:15:23.480 }, 00:15:23.480 "method": "bdev_nvme_attach_controller" 00:15:23.480 }' 00:15:23.480 16:06:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:23.480 16:06:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:23.480 "params": { 00:15:23.480 "name": "Nvme1", 00:15:23.480 "trtype": "tcp", 00:15:23.480 "traddr": "10.0.0.2", 00:15:23.480 "adrfam": "ipv4", 00:15:23.480 "trsvcid": "4420", 00:15:23.480 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:23.481 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:23.481 "hdgst": false, 00:15:23.481 "ddgst": false 00:15:23.481 }, 00:15:23.481 "method": "bdev_nvme_attach_controller" 00:15:23.481 }' 00:15:23.481 16:06:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:23.481 16:06:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:23.481 "params": { 00:15:23.481 "name": "Nvme1", 00:15:23.481 "trtype": "tcp", 00:15:23.481 "traddr": "10.0.0.2", 00:15:23.481 "adrfam": "ipv4", 00:15:23.481 "trsvcid": "4420", 00:15:23.481 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:23.481 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:23.481 "hdgst": false, 00:15:23.481 "ddgst": false 00:15:23.481 }, 00:15:23.481 "method": "bdev_nvme_attach_controller" 00:15:23.481 }' 00:15:23.481 [2024-07-15 16:06:59.070729] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:15:23.481 [2024-07-15 16:06:59.070784] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:23.481 [2024-07-15 16:06:59.072907] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:15:23.481 [2024-07-15 16:06:59.072954] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:23.481 [2024-07-15 16:06:59.074038] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:15:23.481 [2024-07-15 16:06:59.074086] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:23.481 [2024-07-15 16:06:59.075353] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
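The read (core mask 0x20), flush (0x40) and unmap (0x80) instances are launched the same way; their distinct -i shared-memory ids line up with the --file-prefix=spdk1 through spdk4 values in the EAL parameter lines that follow, so the four bdevperf processes keep separate DPDK state while sharing the one target. The script then serializes on all of them before tearing anything down; a sketch assuming the PID variables captured in the trace:

  sync                 # target/bdev_io_wait.sh@35 in the trace above
  wait "$WRITE_PID"    # 2237799
  wait "$READ_PID"     # 2237801
  wait "$FLUSH_PID"    # 2237803
  wait "$UNMAP_PID"    # 2237806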
00:15:23.481 [2024-07-15 16:06:59.075398] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:23.481 EAL: No free 2048 kB hugepages reported on node 1 00:15:23.481 EAL: No free 2048 kB hugepages reported on node 1 00:15:23.481 [2024-07-15 16:06:59.214537] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.481 EAL: No free 2048 kB hugepages reported on node 1 00:15:23.481 [2024-07-15 16:06:59.265239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:23.481 [2024-07-15 16:06:59.275673] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.481 EAL: No free 2048 kB hugepages reported on node 1 00:15:23.481 [2024-07-15 16:06:59.304797] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.741 [2024-07-15 16:06:59.327802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:15:23.741 [2024-07-15 16:06:59.355046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:15:23.741 [2024-07-15 16:06:59.370822] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.741 [2024-07-15 16:06:59.419975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:15:23.741 Running I/O for 1 seconds... 00:15:23.741 Running I/O for 1 seconds... 00:15:24.001 Running I/O for 1 seconds... 00:15:24.001 Running I/O for 1 seconds... 00:15:24.942 00:15:24.942 Latency(us) 00:15:24.942 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:24.942 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:24.942 Nvme1n1 : 1.00 14750.43 57.62 0.00 0.00 8653.63 5079.04 14854.83 00:15:24.942 =================================================================================================================== 00:15:24.942 Total : 14750.43 57.62 0.00 0.00 8653.63 5079.04 14854.83 00:15:24.942 00:15:24.942 Latency(us) 00:15:24.942 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:24.942 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:24.942 Nvme1n1 : 1.01 11612.30 45.36 0.00 0.00 10980.93 6799.36 22173.01 00:15:24.942 =================================================================================================================== 00:15:24.942 Total : 11612.30 45.36 0.00 0.00 10980.93 6799.36 22173.01 00:15:24.942 00:15:24.942 Latency(us) 00:15:24.942 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:24.942 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:24.943 Nvme1n1 : 1.00 16940.37 66.17 0.00 0.00 7536.42 3850.24 15182.51 00:15:24.943 =================================================================================================================== 00:15:24.943 Total : 16940.37 66.17 0.00 0.00 7536.42 3850.24 15182.51 00:15:24.943 00:15:24.943 Latency(us) 00:15:24.943 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:24.943 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:24.943 Nvme1n1 : 1.00 188120.04 734.84 0.00 0.00 677.56 273.07 1153.71 00:15:24.943 =================================================================================================================== 00:15:24.943 Total : 188120.04 734.84 0.00 0.00 677.56 273.07 1153.71 00:15:24.943 16:07:00 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@38 -- # wait 2237801 00:15:24.943 16:07:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2237803 00:15:25.203 16:07:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2237806 00:15:25.203 16:07:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:25.203 16:07:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.203 16:07:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:25.203 16:07:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.203 16:07:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:25.203 16:07:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:25.203 16:07:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:25.203 16:07:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:15:25.203 16:07:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:25.203 16:07:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:15:25.203 16:07:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:25.203 16:07:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:25.203 rmmod nvme_tcp 00:15:25.203 rmmod nvme_fabrics 00:15:25.203 rmmod nvme_keyring 00:15:25.203 16:07:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:25.203 16:07:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:15:25.203 16:07:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:15:25.203 16:07:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2237544 ']' 00:15:25.203 16:07:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2237544 00:15:25.203 16:07:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 2237544 ']' 00:15:25.203 16:07:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 2237544 00:15:25.203 16:07:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:15:25.203 16:07:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:25.203 16:07:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2237544 00:15:25.203 16:07:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:25.203 16:07:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:25.203 16:07:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2237544' 00:15:25.203 killing process with pid 2237544 00:15:25.203 16:07:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 2237544 00:15:25.203 16:07:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 2237544 00:15:25.463 16:07:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:25.463 16:07:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:25.463 16:07:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:25.463 16:07:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:25.463 16:07:01 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:15:25.463 16:07:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.463 16:07:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:25.463 16:07:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.377 16:07:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:27.377 00:15:27.377 real 0m11.812s 00:15:27.377 user 0m18.488s 00:15:27.377 sys 0m6.366s 00:15:27.377 16:07:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:27.377 16:07:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:27.377 ************************************ 00:15:27.377 END TEST nvmf_bdev_io_wait 00:15:27.377 ************************************ 00:15:27.377 16:07:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:27.377 16:07:03 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:27.377 16:07:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:27.377 16:07:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:27.377 16:07:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:27.638 ************************************ 00:15:27.638 START TEST nvmf_queue_depth 00:15:27.638 ************************************ 00:15:27.638 16:07:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:27.638 * Looking for test storage... 
00:15:27.638 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:27.638 16:07:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:27.638 16:07:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:15:27.638 16:07:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:27.638 16:07:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:27.638 16:07:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:27.638 16:07:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:27.638 16:07:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:27.638 16:07:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:27.638 16:07:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:27.639 16:07:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:27.639 16:07:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:27.639 16:07:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:27.639 16:07:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:27.639 16:07:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:27.639 16:07:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:27.639 16:07:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:27.639 16:07:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:27.639 16:07:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:27.639 16:07:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:27.639 16:07:03 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:27.639 16:07:03 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:27.639 16:07:03 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:27.639 16:07:03 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.639 16:07:03 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.639 16:07:03 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.639 16:07:03 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:15:27.639 16:07:03 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.639 16:07:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:15:27.639 16:07:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:27.639 16:07:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:27.639 16:07:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:27.639 16:07:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:27.639 16:07:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:27.639 16:07:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:27.639 16:07:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:27.639 16:07:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:27.639 16:07:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:27.639 16:07:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:27.639 16:07:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:27.639 16:07:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:27.639 16:07:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:27.639 16:07:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:27.639 16:07:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:27.639 16:07:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:27.639 16:07:03 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:15:27.639 16:07:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.639 16:07:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:27.639 16:07:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.639 16:07:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:27.639 16:07:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:27.639 16:07:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:15:27.639 16:07:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:34.334 
16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:34.334 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:34.334 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:34.334 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:34.334 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:15:34.334 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:34.335 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:34.335 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:34.335 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:34.335 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:34.335 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:34.335 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:34.335 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:34.335 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:34.335 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:34.335 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:34.335 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:34.335 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:34.335 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:34.335 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:34.335 16:07:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:34.335 16:07:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:34.335 16:07:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:34.335 16:07:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:34.335 16:07:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:34.595 16:07:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:34.595 16:07:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:34.595 16:07:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:34.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:34.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.406 ms 00:15:34.595 00:15:34.595 --- 10.0.0.2 ping statistics --- 00:15:34.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.595 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms 00:15:34.595 16:07:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:34.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:34.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.427 ms 00:15:34.595 00:15:34.595 --- 10.0.0.1 ping statistics --- 00:15:34.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.595 rtt min/avg/max/mdev = 0.427/0.427/0.427/0.000 ms 00:15:34.595 16:07:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:34.595 16:07:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:15:34.595 16:07:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:34.595 16:07:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:34.595 16:07:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:34.595 16:07:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:34.595 16:07:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:34.595 16:07:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:34.595 16:07:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:34.595 16:07:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:34.595 16:07:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:34.595 16:07:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:34.595 16:07:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:34.595 16:07:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2242340 00:15:34.595 16:07:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2242340 00:15:34.595 16:07:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:34.595 16:07:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 2242340 ']' 00:15:34.595 16:07:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.595 16:07:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:34.595 16:07:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:34.595 16:07:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:34.595 16:07:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:34.595 [2024-07-15 16:07:10.390336] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
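Everything the queue_depth test does from here runs across the two physical e810 ports discovered above: cvl_0_0 was moved into the cvl_0_0_ns_spdk namespace as the target side, cvl_0_1 stays in the root namespace as the initiator side, and the pings confirm 10.0.0.1 and 10.0.0.2 reach each other. Consolidating the commands from the trace into one sketch (interface and namespace names as discovered on this host, target path shortened to the build tree):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # the nvmf target is then started inside the namespace on a single core
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &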
00:15:34.595 [2024-07-15 16:07:10.390397] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:34.595 EAL: No free 2048 kB hugepages reported on node 1 00:15:34.855 [2024-07-15 16:07:10.476321] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.856 [2024-07-15 16:07:10.562930] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:34.856 [2024-07-15 16:07:10.562994] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:34.856 [2024-07-15 16:07:10.563002] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:34.856 [2024-07-15 16:07:10.563009] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:34.856 [2024-07-15 16:07:10.563015] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:34.856 [2024-07-15 16:07:10.563054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:35.429 16:07:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:35.429 16:07:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:15:35.429 16:07:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:35.429 16:07:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:35.429 16:07:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:35.429 16:07:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:35.429 16:07:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:35.429 16:07:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.429 16:07:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:35.429 [2024-07-15 16:07:11.212269] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:35.429 16:07:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.429 16:07:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:35.429 16:07:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.429 16:07:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:35.429 Malloc0 00:15:35.429 16:07:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.429 16:07:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:35.429 16:07:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.429 16:07:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:35.429 16:07:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.429 16:07:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:35.429 16:07:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.429 
16:07:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:35.690 16:07:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.690 16:07:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:35.690 16:07:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.690 16:07:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:35.690 [2024-07-15 16:07:11.281936] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:35.690 16:07:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.690 16:07:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2242506 00:15:35.690 16:07:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:35.690 16:07:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2242506 /var/tmp/bdevperf.sock 00:15:35.690 16:07:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:35.690 16:07:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 2242506 ']' 00:15:35.690 16:07:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:35.690 16:07:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:35.690 16:07:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:35.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:35.690 16:07:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:35.690 16:07:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:35.690 [2024-07-15 16:07:11.337695] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
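bdevperf is started here with -z, so it comes up idle on its own RPC socket and waits to be configured; the rest of the queue-depth run is driven over /var/tmp/bdevperf.sock, as the next trace lines show. A sketch of that sequence, with paths shortened to the SPDK tree:

  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  # attach the namespace exported by cnode1 over TCP; it appears as bdev NVMe0n1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # start the 10-second, 1024-deep verify run and wait for its results
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests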
00:15:35.690 [2024-07-15 16:07:11.337760] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2242506 ] 00:15:35.690 EAL: No free 2048 kB hugepages reported on node 1 00:15:35.690 [2024-07-15 16:07:11.401222] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.690 [2024-07-15 16:07:11.475529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.631 16:07:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:36.631 16:07:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:15:36.631 16:07:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:36.631 16:07:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.631 16:07:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:36.631 NVMe0n1 00:15:36.631 16:07:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.631 16:07:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:36.631 Running I/O for 10 seconds... 00:15:46.635 00:15:46.635 Latency(us) 00:15:46.635 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:46.635 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:46.635 Verification LBA range: start 0x0 length 0x4000 00:15:46.635 NVMe0n1 : 10.04 11735.76 45.84 0.00 0.00 86967.07 4860.59 69468.16 00:15:46.635 =================================================================================================================== 00:15:46.635 Total : 11735.76 45.84 0.00 0.00 86967.07 4860.59 69468.16 00:15:46.635 0 00:15:46.635 16:07:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2242506 00:15:46.636 16:07:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 2242506 ']' 00:15:46.636 16:07:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 2242506 00:15:46.636 16:07:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:15:46.636 16:07:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:46.636 16:07:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2242506 00:15:46.636 16:07:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:46.636 16:07:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:46.636 16:07:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2242506' 00:15:46.636 killing process with pid 2242506 00:15:46.636 16:07:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 2242506 00:15:46.636 Received shutdown signal, test time was about 10.000000 seconds 00:15:46.636 00:15:46.636 Latency(us) 00:15:46.636 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:46.636 
=================================================================================================================== 00:15:46.636 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:46.636 16:07:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 2242506 00:15:46.897 16:07:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:46.897 16:07:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:46.897 16:07:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:46.897 16:07:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:15:46.897 16:07:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:46.897 16:07:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:15:46.897 16:07:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:46.897 16:07:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:46.897 rmmod nvme_tcp 00:15:46.897 rmmod nvme_fabrics 00:15:46.897 rmmod nvme_keyring 00:15:46.897 16:07:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:46.897 16:07:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:15:46.897 16:07:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:15:46.897 16:07:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2242340 ']' 00:15:46.897 16:07:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2242340 00:15:46.897 16:07:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 2242340 ']' 00:15:46.897 16:07:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 2242340 00:15:46.897 16:07:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:15:46.897 16:07:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:46.897 16:07:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2242340 00:15:46.897 16:07:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:46.897 16:07:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:46.897 16:07:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2242340' 00:15:46.897 killing process with pid 2242340 00:15:46.897 16:07:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 2242340 00:15:46.897 16:07:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 2242340 00:15:47.184 16:07:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:47.184 16:07:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:47.184 16:07:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:47.184 16:07:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:47.184 16:07:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:47.184 16:07:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.184 16:07:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:47.184 16:07:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:49.098 16:07:24 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:49.098 00:15:49.098 real 0m21.659s 00:15:49.098 user 0m25.466s 00:15:49.098 sys 0m6.263s 00:15:49.098 16:07:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:49.098 16:07:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:49.098 ************************************ 00:15:49.098 END TEST nvmf_queue_depth 00:15:49.098 ************************************ 00:15:49.098 16:07:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:49.098 16:07:24 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:49.098 16:07:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:49.098 16:07:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:49.098 16:07:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:49.359 ************************************ 00:15:49.359 START TEST nvmf_target_multipath 00:15:49.359 ************************************ 00:15:49.359 16:07:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:49.359 * Looking for test storage... 00:15:49.359 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:49.359 16:07:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:49.359 16:07:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:15:49.359 16:07:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:49.359 16:07:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:49.359 16:07:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:49.359 16:07:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:49.359 16:07:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:49.359 16:07:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:49.360 16:07:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:49.360 16:07:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:49.360 16:07:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:49.360 16:07:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:49.360 16:07:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:49.360 16:07:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:49.360 16:07:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:49.360 16:07:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:49.360 16:07:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:49.360 16:07:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:49.360 16:07:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:49.360 16:07:25 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:49.360 16:07:25 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:49.360 16:07:25 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:49.360 16:07:25 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.360 16:07:25 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.360 16:07:25 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.360 16:07:25 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:15:49.360 16:07:25 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.360 16:07:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:15:49.360 16:07:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:49.360 16:07:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:49.360 16:07:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:49.360 16:07:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:49.360 16:07:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:49.360 16:07:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:49.360 16:07:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:49.360 16:07:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:49.360 16:07:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:49.360 16:07:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:49.360 16:07:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:49.360 16:07:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:49.360 16:07:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:15:49.360 16:07:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:49.360 16:07:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:49.360 16:07:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:49.360 16:07:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:49.360 16:07:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:49.360 16:07:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:49.360 16:07:25 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:49.360 16:07:25 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:49.360 16:07:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:49.360 16:07:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:49.360 16:07:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:15:49.360 16:07:25 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:55.954 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:55.954 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:15:55.954 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:55.954 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:55.954 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:55.954 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:55.954 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:55.954 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:15:55.954 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:55.954 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:15:55.954 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:15:55.954 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:15:55.954 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:15:55.954 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:15:55.954 16:07:31 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:15:55.954 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:55.954 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:55.954 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:55.954 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:55.954 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:55.954 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:55.954 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:55.954 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:55.954 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:55.954 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:56.215 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:56.215 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:56.215 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:56.215 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:56.215 16:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:56.216 16:07:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:56.216 16:07:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:56.476 16:07:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:56.476 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:56.476 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.436 ms 00:15:56.476 00:15:56.476 --- 10.0.0.2 ping statistics --- 00:15:56.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.476 rtt min/avg/max/mdev = 0.436/0.436/0.436/0.000 ms 00:15:56.476 16:07:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:56.476 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:56.476 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.439 ms 00:15:56.476 00:15:56.476 --- 10.0.0.1 ping statistics --- 00:15:56.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.476 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:15:56.476 16:07:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:56.476 16:07:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:15:56.476 16:07:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:56.476 16:07:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:56.476 16:07:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:56.476 16:07:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:56.476 16:07:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:56.476 16:07:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:56.476 16:07:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:56.476 16:07:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:15:56.476 16:07:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:15:56.476 only one NIC for nvmf test 00:15:56.476 16:07:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:15:56.476 16:07:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:56.476 16:07:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:56.476 16:07:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:56.476 16:07:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:56.476 16:07:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:56.476 16:07:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:56.476 rmmod nvme_tcp 00:15:56.476 rmmod nvme_fabrics 00:15:56.476 rmmod nvme_keyring 00:15:56.476 16:07:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:56.476 16:07:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:56.476 16:07:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:56.476 16:07:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:56.476 16:07:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:56.476 16:07:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:56.476 16:07:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:56.476 16:07:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:56.476 16:07:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:56.476 16:07:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.476 16:07:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:56.476 16:07:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:58.444 16:07:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:15:58.444 16:07:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:15:58.444 16:07:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:15:58.444 16:07:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:58.444 16:07:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:58.444 16:07:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:58.444 16:07:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:58.444 16:07:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:58.444 16:07:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:58.444 16:07:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:58.444 16:07:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:58.444 16:07:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:58.444 16:07:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:58.444 16:07:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:58.723 16:07:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:58.723 16:07:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:58.723 16:07:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:58.723 16:07:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:58.723 16:07:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:58.723 16:07:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:58.723 16:07:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:58.723 16:07:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:58.723 00:15:58.723 real 0m9.315s 00:15:58.723 user 0m1.992s 00:15:58.723 sys 0m5.221s 00:15:58.723 16:07:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:58.723 16:07:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:58.723 ************************************ 00:15:58.723 END TEST nvmf_target_multipath 00:15:58.723 ************************************ 00:15:58.723 16:07:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:58.723 16:07:34 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:58.723 16:07:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:58.723 16:07:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:58.723 16:07:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:58.723 ************************************ 00:15:58.723 START TEST nvmf_zcopy 00:15:58.723 ************************************ 00:15:58.723 16:07:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:58.723 * Looking for test storage... 
00:15:58.723 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:58.723 16:07:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:58.723 16:07:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:15:58.723 16:07:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:58.723 16:07:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:58.723 16:07:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:58.723 16:07:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:58.723 16:07:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:58.723 16:07:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:58.723 16:07:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:58.723 16:07:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:58.723 16:07:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:58.723 16:07:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:58.723 16:07:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:58.723 16:07:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:58.723 16:07:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:58.723 16:07:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:58.723 16:07:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:58.723 16:07:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:58.723 16:07:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:58.723 16:07:34 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:58.723 16:07:34 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:58.723 16:07:34 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:58.723 16:07:34 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.723 16:07:34 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:15:58.723 16:07:34 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.723 16:07:34 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:15:58.723 16:07:34 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.724 16:07:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:15:58.724 16:07:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:58.724 16:07:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:58.724 16:07:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:58.724 16:07:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:58.724 16:07:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:58.724 16:07:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:58.724 16:07:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:58.724 16:07:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:58.724 16:07:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:15:58.724 16:07:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:58.724 16:07:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:58.724 16:07:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:58.724 16:07:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:58.724 16:07:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:58.724 16:07:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:58.724 16:07:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:58.724 16:07:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:58.724 16:07:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:58.724 16:07:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:58.724 16:07:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:15:58.724 16:07:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:06.872 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:06.872 
16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:06.872 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:06.872 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:06.872 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:06.872 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:06.872 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.517 ms 00:16:06.872 00:16:06.872 --- 10.0.0.2 ping statistics --- 00:16:06.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.872 rtt min/avg/max/mdev = 0.517/0.517/0.517/0.000 ms 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:06.872 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:06.872 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.381 ms 00:16:06.872 00:16:06.872 --- 10.0.0.1 ping statistics --- 00:16:06.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.872 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:16:06.872 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:06.873 16:07:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:06.873 16:07:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:06.873 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2253152 00:16:06.873 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2253152 00:16:06.873 16:07:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:06.873 16:07:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 2253152 ']' 00:16:06.873 16:07:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.873 16:07:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:06.873 16:07:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:06.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:06.873 16:07:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:06.873 16:07:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:06.873 [2024-07-15 16:07:41.848481] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:16:06.873 [2024-07-15 16:07:41.848545] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:06.873 EAL: No free 2048 kB hugepages reported on node 1 00:16:06.873 [2024-07-15 16:07:41.934592] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.873 [2024-07-15 16:07:42.027339] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:06.873 [2024-07-15 16:07:42.027401] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:06.873 [2024-07-15 16:07:42.027408] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:06.873 [2024-07-15 16:07:42.027415] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:06.873 [2024-07-15 16:07:42.027421] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:06.873 [2024-07-15 16:07:42.027447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:06.873 16:07:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:06.873 16:07:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:16:06.873 16:07:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:06.873 16:07:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:06.873 16:07:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:06.873 16:07:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:06.873 16:07:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:16:06.873 16:07:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:16:06.873 16:07:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.873 16:07:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:06.873 [2024-07-15 16:07:42.691324] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:06.873 16:07:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.873 16:07:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:06.873 16:07:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.873 16:07:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:06.873 16:07:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.873 16:07:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:06.873 16:07:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.873 16:07:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:06.873 [2024-07-15 16:07:42.707523] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:06.873 16:07:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.873 16:07:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:07.134 16:07:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.134 16:07:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:07.134 16:07:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.134 16:07:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:16:07.134 16:07:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.134 16:07:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:07.134 malloc0 00:16:07.134 16:07:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.134 
16:07:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:07.134 16:07:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.134 16:07:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:07.134 16:07:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.134 16:07:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:16:07.134 16:07:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:16:07.134 16:07:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:16:07.134 16:07:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:16:07.134 16:07:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:07.134 16:07:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:07.134 { 00:16:07.134 "params": { 00:16:07.134 "name": "Nvme$subsystem", 00:16:07.134 "trtype": "$TEST_TRANSPORT", 00:16:07.134 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:07.134 "adrfam": "ipv4", 00:16:07.134 "trsvcid": "$NVMF_PORT", 00:16:07.134 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:07.134 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:07.134 "hdgst": ${hdgst:-false}, 00:16:07.134 "ddgst": ${ddgst:-false} 00:16:07.134 }, 00:16:07.134 "method": "bdev_nvme_attach_controller" 00:16:07.134 } 00:16:07.134 EOF 00:16:07.134 )") 00:16:07.134 16:07:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:16:07.134 16:07:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:16:07.134 16:07:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:07.134 16:07:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:07.134 "params": { 00:16:07.134 "name": "Nvme1", 00:16:07.134 "trtype": "tcp", 00:16:07.134 "traddr": "10.0.0.2", 00:16:07.134 "adrfam": "ipv4", 00:16:07.134 "trsvcid": "4420", 00:16:07.134 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:07.134 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:07.134 "hdgst": false, 00:16:07.134 "ddgst": false 00:16:07.134 }, 00:16:07.134 "method": "bdev_nvme_attach_controller" 00:16:07.134 }' 00:16:07.134 [2024-07-15 16:07:42.796301] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:16:07.134 [2024-07-15 16:07:42.796366] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2253185 ] 00:16:07.134 EAL: No free 2048 kB hugepages reported on node 1 00:16:07.134 [2024-07-15 16:07:42.860794] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:07.134 [2024-07-15 16:07:42.936139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.705 Running I/O for 10 seconds... 
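Condensed for readability, the zcopy target bring-up traced above amounts to the sequence sketched below. This is a reconstruction from the trace, not a verbatim excerpt: repository paths are abbreviated, and the rpc_cmd wrapper used by the test is shown as direct scripts/rpc.py calls.

# Network layout from nvmf_tcp_init above: cvl_0_0 (10.0.0.2/24) lives in the
# cvl_0_0_ns_spdk namespace as the target port; cvl_0_1 (10.0.0.1/24) stays in the
# default namespace as the initiator port, with TCP/4420 allowed through iptables.

# Start the target inside the server-side namespace:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2

# Configure it over the RPC socket:
./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy              # TCP transport; --zcopy exercises the zero-copy path
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0                     # 32 MB RAM-backed bdev, 4096-byte blocks
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

# Drive it with bdevperf; the JSON fragment printed by gen_nvmf_target_json above
# attaches Nvme1 over TCP to 10.0.0.2:4420 (fed in via process substitution):
./build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192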
00:16:17.706 00:16:17.707 Latency(us) 00:16:17.707 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:17.707 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:16:17.707 Verification LBA range: start 0x0 length 0x1000 00:16:17.707 Nvme1n1 : 10.01 9700.89 75.79 0.00 0.00 13143.12 1884.16 32768.00 00:16:17.707 =================================================================================================================== 00:16:17.707 Total : 9700.89 75.79 0.00 0.00 13143.12 1884.16 32768.00 00:16:17.707 16:07:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2255338 00:16:17.707 16:07:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:16:17.707 16:07:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:17.707 16:07:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:16:17.707 16:07:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:16:17.707 16:07:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:16:17.707 16:07:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:16:17.707 16:07:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:17.707 16:07:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:17.707 { 00:16:17.707 "params": { 00:16:17.707 "name": "Nvme$subsystem", 00:16:17.707 "trtype": "$TEST_TRANSPORT", 00:16:17.707 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:17.707 "adrfam": "ipv4", 00:16:17.707 "trsvcid": "$NVMF_PORT", 00:16:17.707 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:17.707 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:17.707 "hdgst": ${hdgst:-false}, 00:16:17.707 "ddgst": ${ddgst:-false} 00:16:17.707 }, 00:16:17.707 "method": "bdev_nvme_attach_controller" 00:16:17.707 } 00:16:17.707 EOF 00:16:17.707 )") 00:16:17.707 [2024-07-15 16:07:53.413946] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.707 [2024-07-15 16:07:53.413973] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.707 16:07:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:16:17.707 16:07:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
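As a quick sanity check on the bdevperf verify-run table above (an illustrative calculation, not part of the test output), the reported IOPS and the 8192-byte I/O size are consistent with the MiB/s column:

# 9700.89 IOPS x 8192 bytes per I/O, expressed in MiB/s:
echo '9700.89 * 8192 / 1048576' | bc -l    # ~= 75.79, matching the MiB/s column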
00:16:17.707 [2024-07-15 16:07:53.421937] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.707 [2024-07-15 16:07:53.421949] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.707 16:07:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:17.707 16:07:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:17.707 "params": { 00:16:17.707 "name": "Nvme1", 00:16:17.707 "trtype": "tcp", 00:16:17.707 "traddr": "10.0.0.2", 00:16:17.707 "adrfam": "ipv4", 00:16:17.707 "trsvcid": "4420", 00:16:17.707 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:17.707 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:17.707 "hdgst": false, 00:16:17.707 "ddgst": false 00:16:17.707 }, 00:16:17.707 "method": "bdev_nvme_attach_controller" 00:16:17.707 }' 00:16:17.707 [2024-07-15 16:07:53.429952] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.707 [2024-07-15 16:07:53.429960] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.707 [2024-07-15 16:07:53.437972] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.707 [2024-07-15 16:07:53.437979] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.707 [2024-07-15 16:07:53.445993] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.707 [2024-07-15 16:07:53.446000] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.707 [2024-07-15 16:07:53.454014] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.707 [2024-07-15 16:07:53.454021] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.707 [2024-07-15 16:07:53.456891] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
00:16:17.707 [2024-07-15 16:07:53.456936] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2255338 ]
00:16:17.707 [2024-07-15 16:07:53.462035] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:17.707 [2024-07-15 16:07:53.462042] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:17.707 EAL: No free 2048 kB hugepages reported on node 1
00:16:17.707 [2024-07-15 16:07:53.514821] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:17.968 [2024-07-15 16:07:53.580459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:16:17.968 Running I/O for 5 seconds...
[... the subsystem.c:2054 "Requested NSID 1 already in use" / nvmf_rpc.c:1546 "Unable to add namespace" error pair repeats for each subsequent namespace-add attempt (interleaved with the notices above), roughly every 8-10 ms, from 16:07:53.470 through 16:07:56.054 (elapsed 00:16:17.707-00:16:20.320) ...]
00:16:20.320 [2024-07-15 16:07:56.063128]
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.320 [2024-07-15 16:07:56.063142] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.320 [2024-07-15 16:07:56.072061] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.320 [2024-07-15 16:07:56.072076] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.320 [2024-07-15 16:07:56.080795] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.320 [2024-07-15 16:07:56.080809] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.320 [2024-07-15 16:07:56.089438] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.320 [2024-07-15 16:07:56.089452] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.320 [2024-07-15 16:07:56.097539] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.320 [2024-07-15 16:07:56.097553] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.320 [2024-07-15 16:07:56.106438] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.320 [2024-07-15 16:07:56.106452] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.320 [2024-07-15 16:07:56.115475] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.320 [2024-07-15 16:07:56.115489] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.320 [2024-07-15 16:07:56.124530] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.320 [2024-07-15 16:07:56.124544] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.320 [2024-07-15 16:07:56.132395] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.320 [2024-07-15 16:07:56.132409] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.320 [2024-07-15 16:07:56.141185] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.320 [2024-07-15 16:07:56.141206] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.320 [2024-07-15 16:07:56.149820] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.320 [2024-07-15 16:07:56.149834] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.320 [2024-07-15 16:07:56.158424] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.320 [2024-07-15 16:07:56.158438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.580 [2024-07-15 16:07:56.167055] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.580 [2024-07-15 16:07:56.167069] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.580 [2024-07-15 16:07:56.175924] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.580 [2024-07-15 16:07:56.175938] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.580 [2024-07-15 16:07:56.184861] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.580 [2024-07-15 16:07:56.184874] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.580 [2024-07-15 16:07:56.193858] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.580 [2024-07-15 16:07:56.193872] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.580 [2024-07-15 16:07:56.202716] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.580 [2024-07-15 16:07:56.202730] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.580 [2024-07-15 16:07:56.211473] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.580 [2024-07-15 16:07:56.211487] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.580 [2024-07-15 16:07:56.220629] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.580 [2024-07-15 16:07:56.220643] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.580 [2024-07-15 16:07:56.229516] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.580 [2024-07-15 16:07:56.229529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.580 [2024-07-15 16:07:56.238533] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.580 [2024-07-15 16:07:56.238547] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.580 [2024-07-15 16:07:56.247058] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.580 [2024-07-15 16:07:56.247072] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.580 [2024-07-15 16:07:56.256046] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.580 [2024-07-15 16:07:56.256060] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.580 [2024-07-15 16:07:56.264556] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.580 [2024-07-15 16:07:56.264570] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.580 [2024-07-15 16:07:56.273584] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.580 [2024-07-15 16:07:56.273599] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.580 [2024-07-15 16:07:56.281784] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.580 [2024-07-15 16:07:56.281798] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.580 [2024-07-15 16:07:56.290611] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.580 [2024-07-15 16:07:56.290625] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.580 [2024-07-15 16:07:56.299052] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.580 [2024-07-15 16:07:56.299066] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.580 [2024-07-15 16:07:56.307829] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.580 [2024-07-15 16:07:56.307846] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.580 [2024-07-15 16:07:56.316862] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.580 [2024-07-15 16:07:56.316876] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.580 [2024-07-15 16:07:56.325308] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.580 [2024-07-15 16:07:56.325322] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.580 [2024-07-15 16:07:56.334035] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.580 [2024-07-15 16:07:56.334049] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.580 [2024-07-15 16:07:56.342822] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.580 [2024-07-15 16:07:56.342836] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.580 [2024-07-15 16:07:56.351155] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.580 [2024-07-15 16:07:56.351169] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.580 [2024-07-15 16:07:56.359762] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.580 [2024-07-15 16:07:56.359776] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.580 [2024-07-15 16:07:56.368139] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.580 [2024-07-15 16:07:56.368154] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.580 [2024-07-15 16:07:56.377067] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.580 [2024-07-15 16:07:56.377081] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.580 [2024-07-15 16:07:56.384798] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.580 [2024-07-15 16:07:56.384812] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.580 [2024-07-15 16:07:56.393847] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.580 [2024-07-15 16:07:56.393861] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.580 [2024-07-15 16:07:56.401691] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.580 [2024-07-15 16:07:56.401705] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.580 [2024-07-15 16:07:56.410399] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.580 [2024-07-15 16:07:56.410412] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.580 [2024-07-15 16:07:56.418553] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.580 [2024-07-15 16:07:56.418567] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.841 [2024-07-15 16:07:56.427138] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.841 [2024-07-15 16:07:56.427152] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.841 [2024-07-15 16:07:56.435914] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.841 [2024-07-15 16:07:56.435928] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.841 [2024-07-15 16:07:56.444822] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.841 [2024-07-15 16:07:56.444836] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.841 [2024-07-15 16:07:56.453784] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.841 [2024-07-15 16:07:56.453798] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.841 [2024-07-15 16:07:56.462728] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.841 [2024-07-15 16:07:56.462743] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.841 [2024-07-15 16:07:56.470796] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.841 [2024-07-15 16:07:56.470813] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.841 [2024-07-15 16:07:56.478989] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.841 [2024-07-15 16:07:56.479003] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.841 [2024-07-15 16:07:56.487426] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.841 [2024-07-15 16:07:56.487441] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.841 [2024-07-15 16:07:56.495981] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.841 [2024-07-15 16:07:56.495995] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.841 [2024-07-15 16:07:56.504972] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.841 [2024-07-15 16:07:56.504986] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.841 [2024-07-15 16:07:56.513656] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.841 [2024-07-15 16:07:56.513671] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.841 [2024-07-15 16:07:56.522285] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.841 [2024-07-15 16:07:56.522300] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.841 [2024-07-15 16:07:56.530999] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.841 [2024-07-15 16:07:56.531014] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.841 [2024-07-15 16:07:56.539478] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.841 [2024-07-15 16:07:56.539492] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.841 [2024-07-15 16:07:56.548111] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.841 [2024-07-15 16:07:56.548131] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.841 [2024-07-15 16:07:56.557208] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.841 [2024-07-15 16:07:56.557222] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.841 [2024-07-15 16:07:56.566350] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.841 [2024-07-15 16:07:56.566364] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.841 [2024-07-15 16:07:56.574768] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.841 [2024-07-15 16:07:56.574783] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.841 [2024-07-15 16:07:56.583625] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.841 [2024-07-15 16:07:56.583639] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.841 [2024-07-15 16:07:56.591856] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.841 [2024-07-15 16:07:56.591870] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.841 [2024-07-15 16:07:56.600642] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.841 [2024-07-15 16:07:56.600657] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.841 [2024-07-15 16:07:56.609555] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.841 [2024-07-15 16:07:56.609570] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.841 [2024-07-15 16:07:56.618754] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.841 [2024-07-15 16:07:56.618768] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.841 [2024-07-15 16:07:56.627707] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.841 [2024-07-15 16:07:56.627721] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.841 [2024-07-15 16:07:56.636336] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.841 [2024-07-15 16:07:56.636350] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.841 [2024-07-15 16:07:56.645108] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.841 [2024-07-15 16:07:56.645127] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.841 [2024-07-15 16:07:56.654130] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.841 [2024-07-15 16:07:56.654145] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.841 [2024-07-15 16:07:56.661965] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.841 [2024-07-15 16:07:56.661980] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.841 [2024-07-15 16:07:56.670407] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.841 [2024-07-15 16:07:56.670421] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.841 [2024-07-15 16:07:56.679027] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.841 [2024-07-15 16:07:56.679042] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.101 [2024-07-15 16:07:56.687929] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.101 [2024-07-15 16:07:56.687944] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.101 [2024-07-15 16:07:56.697058] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.101 [2024-07-15 16:07:56.697072] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.101 [2024-07-15 16:07:56.705374] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.101 [2024-07-15 16:07:56.705388] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.101 [2024-07-15 16:07:56.714144] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.101 [2024-07-15 16:07:56.714159] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.101 [2024-07-15 16:07:56.722792] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.101 [2024-07-15 16:07:56.722806] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.101 [2024-07-15 16:07:56.731545] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.101 [2024-07-15 16:07:56.731559] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.101 [2024-07-15 16:07:56.740748] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.101 [2024-07-15 16:07:56.740762] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.101 [2024-07-15 16:07:56.749220] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.101 [2024-07-15 16:07:56.749235] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.101 [2024-07-15 16:07:56.757885] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.101 [2024-07-15 16:07:56.757899] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.101 [2024-07-15 16:07:56.766676] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.101 [2024-07-15 16:07:56.766690] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.101 [2024-07-15 16:07:56.775411] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.101 [2024-07-15 16:07:56.775425] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.101 [2024-07-15 16:07:56.783221] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.101 [2024-07-15 16:07:56.783235] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.101 [2024-07-15 16:07:56.792177] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.101 [2024-07-15 16:07:56.792191] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.101 [2024-07-15 16:07:56.800613] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.101 [2024-07-15 16:07:56.800627] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.101 [2024-07-15 16:07:56.809450] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.101 [2024-07-15 16:07:56.809464] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.101 [2024-07-15 16:07:56.818225] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.101 [2024-07-15 16:07:56.818239] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.101 [2024-07-15 16:07:56.827062] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.101 [2024-07-15 16:07:56.827076] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.101 [2024-07-15 16:07:56.835741] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.101 [2024-07-15 16:07:56.835755] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.101 [2024-07-15 16:07:56.844588] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.101 [2024-07-15 16:07:56.844603] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.101 [2024-07-15 16:07:56.853172] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.101 [2024-07-15 16:07:56.853186] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.101 [2024-07-15 16:07:56.861599] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.101 [2024-07-15 16:07:56.861613] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.101 [2024-07-15 16:07:56.870209] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.101 [2024-07-15 16:07:56.870223] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.102 [2024-07-15 16:07:56.879005] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.102 [2024-07-15 16:07:56.879019] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.102 [2024-07-15 16:07:56.888246] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.102 [2024-07-15 16:07:56.888261] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.102 [2024-07-15 16:07:56.896447] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.102 [2024-07-15 16:07:56.896461] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.102 [2024-07-15 16:07:56.904792] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.102 [2024-07-15 16:07:56.904805] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.102 [2024-07-15 16:07:56.913825] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.102 [2024-07-15 16:07:56.913839] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.102 [2024-07-15 16:07:56.922539] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.102 [2024-07-15 16:07:56.922553] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.102 [2024-07-15 16:07:56.931069] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.102 [2024-07-15 16:07:56.931083] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.102 [2024-07-15 16:07:56.940337] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.102 [2024-07-15 16:07:56.940352] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.362 [2024-07-15 16:07:56.948681] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.362 [2024-07-15 16:07:56.948695] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.362 [2024-07-15 16:07:56.957253] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.362 [2024-07-15 16:07:56.957267] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.362 [2024-07-15 16:07:56.965896] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.362 [2024-07-15 16:07:56.965910] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.362 [2024-07-15 16:07:56.974229] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.362 [2024-07-15 16:07:56.974243] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.362 [2024-07-15 16:07:56.982917] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.362 [2024-07-15 16:07:56.982931] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.362 [2024-07-15 16:07:56.991439] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.362 [2024-07-15 16:07:56.991453] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.362 [2024-07-15 16:07:56.999908] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.362 [2024-07-15 16:07:56.999922] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.362 [2024-07-15 16:07:57.009037] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.362 [2024-07-15 16:07:57.009052] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.362 [2024-07-15 16:07:57.017359] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.362 [2024-07-15 16:07:57.017374] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.362 [2024-07-15 16:07:57.025755] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.362 [2024-07-15 16:07:57.025769] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.362 [2024-07-15 16:07:57.034177] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.362 [2024-07-15 16:07:57.034192] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.362 [2024-07-15 16:07:57.042759] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.362 [2024-07-15 16:07:57.042773] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.362 [2024-07-15 16:07:57.051301] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.362 [2024-07-15 16:07:57.051315] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.362 [2024-07-15 16:07:57.060357] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.362 [2024-07-15 16:07:57.060371] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.362 [2024-07-15 16:07:57.068847] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.362 [2024-07-15 16:07:57.068861] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.362 [2024-07-15 16:07:57.077260] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.362 [2024-07-15 16:07:57.077274] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.362 [2024-07-15 16:07:57.085764] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.362 [2024-07-15 16:07:57.085778] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.362 [2024-07-15 16:07:57.094053] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.362 [2024-07-15 16:07:57.094067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.362 [2024-07-15 16:07:57.102708] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.362 [2024-07-15 16:07:57.102723] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.362 [2024-07-15 16:07:57.111222] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.362 [2024-07-15 16:07:57.111236] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.362 [2024-07-15 16:07:57.120217] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.362 [2024-07-15 16:07:57.120235] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.362 [2024-07-15 16:07:57.128603] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.362 [2024-07-15 16:07:57.128617] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.362 [2024-07-15 16:07:57.137634] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.362 [2024-07-15 16:07:57.137648] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.362 [2024-07-15 16:07:57.146159] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.362 [2024-07-15 16:07:57.146174] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.362 [2024-07-15 16:07:57.154996] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.362 [2024-07-15 16:07:57.155010] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.362 [2024-07-15 16:07:57.164181] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.362 [2024-07-15 16:07:57.164195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.362 [2024-07-15 16:07:57.172923] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.362 [2024-07-15 16:07:57.172937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.362 [2024-07-15 16:07:57.181964] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.362 [2024-07-15 16:07:57.181979] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.362 [2024-07-15 16:07:57.189783] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.362 [2024-07-15 16:07:57.189797] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.362 [2024-07-15 16:07:57.198408] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.362 [2024-07-15 16:07:57.198421] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.623 [2024-07-15 16:07:57.207141] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.623 [2024-07-15 16:07:57.207155] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.623 [2024-07-15 16:07:57.216250] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.623 [2024-07-15 16:07:57.216265] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.623 [2024-07-15 16:07:57.224592] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.623 [2024-07-15 16:07:57.224605] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.623 [2024-07-15 16:07:57.233220] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.623 [2024-07-15 16:07:57.233234] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.623 [2024-07-15 16:07:57.241549] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.623 [2024-07-15 16:07:57.241563] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.623 [2024-07-15 16:07:57.250120] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.623 [2024-07-15 16:07:57.250137] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.623 [2024-07-15 16:07:57.258928] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.623 [2024-07-15 16:07:57.258942] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.623 [2024-07-15 16:07:57.267630] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.623 [2024-07-15 16:07:57.267643] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.623 [2024-07-15 16:07:57.276689] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.623 [2024-07-15 16:07:57.276702] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.623 [2024-07-15 16:07:57.285702] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.623 [2024-07-15 16:07:57.285720] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.623 [2024-07-15 16:07:57.293899] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.623 [2024-07-15 16:07:57.293913] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.623 [2024-07-15 16:07:57.306532] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.623 [2024-07-15 16:07:57.306545] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.623 [2024-07-15 16:07:57.314330] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.623 [2024-07-15 16:07:57.314344] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.623 [2024-07-15 16:07:57.323015] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.623 [2024-07-15 16:07:57.323028] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.623 [2024-07-15 16:07:57.332096] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.623 [2024-07-15 16:07:57.332109] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.623 [2024-07-15 16:07:57.341086] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.623 [2024-07-15 16:07:57.341099] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.623 [2024-07-15 16:07:57.349421] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.623 [2024-07-15 16:07:57.349434] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.623 [2024-07-15 16:07:57.358388] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.623 [2024-07-15 16:07:57.358402] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.623 [2024-07-15 16:07:57.366915] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.623 [2024-07-15 16:07:57.366929] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.623 [2024-07-15 16:07:57.375569] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.623 [2024-07-15 16:07:57.375583] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.623 [2024-07-15 16:07:57.384143] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.623 [2024-07-15 16:07:57.384157] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.623 [2024-07-15 16:07:57.392202] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.623 [2024-07-15 16:07:57.392216] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.623 [2024-07-15 16:07:57.400461] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.623 [2024-07-15 16:07:57.400474] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.623 [2024-07-15 16:07:57.408958] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.623 [2024-07-15 16:07:57.408971] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.623 [2024-07-15 16:07:57.418259] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.623 [2024-07-15 16:07:57.418273] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.623 [2024-07-15 16:07:57.427272] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.623 [2024-07-15 16:07:57.427286] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.623 [2024-07-15 16:07:57.436146] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.623 [2024-07-15 16:07:57.436160] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.623 [2024-07-15 16:07:57.444931] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.623 [2024-07-15 16:07:57.444944] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.623 [2024-07-15 16:07:57.453296] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.623 [2024-07-15 16:07:57.453315] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.623 [2024-07-15 16:07:57.462052] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.623 [2024-07-15 16:07:57.462065] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.884 [2024-07-15 16:07:57.470751] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.884 [2024-07-15 16:07:57.470765] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.884 [2024-07-15 16:07:57.479334] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.884 [2024-07-15 16:07:57.479348] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.884 [2024-07-15 16:07:57.487756] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.884 [2024-07-15 16:07:57.487770] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.884 [2024-07-15 16:07:57.496771] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.884 [2024-07-15 16:07:57.496785] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.884 [2024-07-15 16:07:57.504878] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.884 [2024-07-15 16:07:57.504892] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.884 [2024-07-15 16:07:57.513791] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.884 [2024-07-15 16:07:57.513805] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.884 [2024-07-15 16:07:57.522441] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.884 [2024-07-15 16:07:57.522454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.884 [2024-07-15 16:07:57.531144] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.884 [2024-07-15 16:07:57.531158] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.884 [2024-07-15 16:07:57.539761] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.884 [2024-07-15 16:07:57.539775] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.884 [2024-07-15 16:07:57.548269] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.884 [2024-07-15 16:07:57.548282] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.884 [2024-07-15 16:07:57.556611] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.884 [2024-07-15 16:07:57.556625] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.884 [2024-07-15 16:07:57.564913] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.884 [2024-07-15 16:07:57.564927] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.884 [2024-07-15 16:07:57.573802] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.884 [2024-07-15 16:07:57.573815] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.884 [2024-07-15 16:07:57.582218] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.884 [2024-07-15 16:07:57.582232] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.884 [2024-07-15 16:07:57.590771] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.884 [2024-07-15 16:07:57.590785] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.884 [2024-07-15 16:07:57.599494] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.884 [2024-07-15 16:07:57.599507] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.884 [2024-07-15 16:07:57.608062] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.884 [2024-07-15 16:07:57.608075] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.884 [2024-07-15 16:07:57.617056] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.884 [2024-07-15 16:07:57.617075] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.885 [2024-07-15 16:07:57.625542] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.885 [2024-07-15 16:07:57.625557] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.885 [2024-07-15 16:07:57.634223] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.885 [2024-07-15 16:07:57.634237] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.885 [2024-07-15 16:07:57.643288] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.885 [2024-07-15 16:07:57.643302] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.885 [2024-07-15 16:07:57.652116] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.885 [2024-07-15 16:07:57.652134] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.885 [2024-07-15 16:07:57.660787] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.885 [2024-07-15 16:07:57.660801] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.885 [2024-07-15 16:07:57.669104] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.885 [2024-07-15 16:07:57.669117] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.885 [2024-07-15 16:07:57.677931] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.885 [2024-07-15 16:07:57.677944] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.885 [2024-07-15 16:07:57.686537] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.885 [2024-07-15 16:07:57.686550] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.885 [2024-07-15 16:07:57.694310] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.885 [2024-07-15 16:07:57.694323] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.885 [2024-07-15 16:07:57.703463] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.885 [2024-07-15 16:07:57.703476] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.885 [2024-07-15 16:07:57.711755] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.885 [2024-07-15 16:07:57.711769] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.885 [2024-07-15 16:07:57.720210] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.885 [2024-07-15 16:07:57.720224] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.145 [2024-07-15 16:07:57.729003] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.145 [2024-07-15 16:07:57.729017] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.145 [2024-07-15 16:07:57.737818] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.145 [2024-07-15 16:07:57.737831] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.145 [2024-07-15 16:07:57.746535] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.145 [2024-07-15 16:07:57.746548] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.145 [2024-07-15 16:07:57.754787] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.145 [2024-07-15 16:07:57.754800] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.145 [2024-07-15 16:07:57.763308] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.145 [2024-07-15 16:07:57.763322] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.145 [2024-07-15 16:07:57.771433] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.145 [2024-07-15 16:07:57.771446] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.145 [2024-07-15 16:07:57.780145] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.145 [2024-07-15 16:07:57.780163] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.145 [2024-07-15 16:07:57.788972] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.145 [2024-07-15 16:07:57.788986] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.145 [2024-07-15 16:07:57.797836] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.145 [2024-07-15 16:07:57.797849] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.145 [2024-07-15 16:07:57.806717] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.145 [2024-07-15 16:07:57.806731] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.145 [2024-07-15 16:07:57.815295] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.145 [2024-07-15 16:07:57.815308] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.145 [2024-07-15 16:07:57.824081] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.145 [2024-07-15 16:07:57.824094] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.145 [2024-07-15 16:07:57.833047] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.145 [2024-07-15 16:07:57.833061] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.145 [2024-07-15 16:07:57.841426] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.145 [2024-07-15 16:07:57.841440] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.145 [2024-07-15 16:07:57.850502] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.145 [2024-07-15 16:07:57.850516] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.145 [2024-07-15 16:07:57.858736] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.145 [2024-07-15 16:07:57.858750] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.145 [2024-07-15 16:07:57.867692] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.145 [2024-07-15 16:07:57.867707] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.145 [2024-07-15 16:07:57.876610] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.145 [2024-07-15 16:07:57.876624] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.145 [2024-07-15 16:07:57.885537] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.145 [2024-07-15 16:07:57.885551] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.145 [2024-07-15 16:07:57.894239] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.145 [2024-07-15 16:07:57.894253] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.145 [2024-07-15 16:07:57.902913] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.145 [2024-07-15 16:07:57.902926] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.145 [2024-07-15 16:07:57.911880] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.145 [2024-07-15 16:07:57.911893] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.145 [2024-07-15 16:07:57.920262] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.145 [2024-07-15 16:07:57.920276] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.145 [2024-07-15 16:07:57.928837] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.145 [2024-07-15 16:07:57.928851] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.145 [2024-07-15 16:07:57.937277] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.145 [2024-07-15 16:07:57.937291] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.145 [2024-07-15 16:07:57.946375] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.145 [2024-07-15 16:07:57.946389] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.145 [2024-07-15 16:07:57.955137] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.145 [2024-07-15 16:07:57.955151] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.145 [2024-07-15 16:07:57.964440] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.145 [2024-07-15 16:07:57.964454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.145 [2024-07-15 16:07:57.972838] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.145 [2024-07-15 16:07:57.972852] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.145 [2024-07-15 16:07:57.981642] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.145 [2024-07-15 16:07:57.981656] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.406 [2024-07-15 16:07:57.989956] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.406 [2024-07-15 16:07:57.989970] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.406 [2024-07-15 16:07:57.998692] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.406 [2024-07-15 16:07:57.998706] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.406 [2024-07-15 16:07:58.007588] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.406 [2024-07-15 16:07:58.007602] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.406 [2024-07-15 16:07:58.016164] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.406 [2024-07-15 16:07:58.016177] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.406 [2024-07-15 16:07:58.024583] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.406 [2024-07-15 16:07:58.024598] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.406 [2024-07-15 16:07:58.033650] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.406 [2024-07-15 16:07:58.033665] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.406 [2024-07-15 16:07:58.042245] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.406 [2024-07-15 16:07:58.042259] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.406 [2024-07-15 16:07:58.050882] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.406 [2024-07-15 16:07:58.050896] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.406 [2024-07-15 16:07:58.059382] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.406 [2024-07-15 16:07:58.059395] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.406 [2024-07-15 16:07:58.068192] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.406 [2024-07-15 16:07:58.068205] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.406 [2024-07-15 16:07:58.077323] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.406 [2024-07-15 16:07:58.077337] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.406 [2024-07-15 16:07:58.085778] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.406 [2024-07-15 16:07:58.085792] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.406 [2024-07-15 16:07:58.094437] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.406 [2024-07-15 16:07:58.094451] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.406 [2024-07-15 16:07:58.103574] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.406 [2024-07-15 16:07:58.103588] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.406 [2024-07-15 16:07:58.112101] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.406 [2024-07-15 16:07:58.112114] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.406 [2024-07-15 16:07:58.120169] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.406 [2024-07-15 16:07:58.120182] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.406 [2024-07-15 16:07:58.128442] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.406 [2024-07-15 16:07:58.128455] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.406 [2024-07-15 16:07:58.136704] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.406 [2024-07-15 16:07:58.136717] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.406 [2024-07-15 16:07:58.145516] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.406 [2024-07-15 16:07:58.145530] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.406 [2024-07-15 16:07:58.153977] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.406 [2024-07-15 16:07:58.153991] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.406 [2024-07-15 16:07:58.162511] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.406 [2024-07-15 16:07:58.162524] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.406 [2024-07-15 16:07:58.171470] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.406 [2024-07-15 16:07:58.171484] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.406 [2024-07-15 16:07:58.180590] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.406 [2024-07-15 16:07:58.180605] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.406 [2024-07-15 16:07:58.189597] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.406 [2024-07-15 16:07:58.189612] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.406 [2024-07-15 16:07:58.198323] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.406 [2024-07-15 16:07:58.198338] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.406 [2024-07-15 16:07:58.206458] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.406 [2024-07-15 16:07:58.206472] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.406 [2024-07-15 16:07:58.215137] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.406 [2024-07-15 16:07:58.215151] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.406 [2024-07-15 16:07:58.223874] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.406 [2024-07-15 16:07:58.223888] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.406 [2024-07-15 16:07:58.232608] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.406 [2024-07-15 16:07:58.232623] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.406 [2024-07-15 16:07:58.241369] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.406 [2024-07-15 16:07:58.241383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.725 [2024-07-15 16:07:58.250694] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.725 [2024-07-15 16:07:58.250708] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.725 [2024-07-15 16:07:58.258993] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.725 [2024-07-15 16:07:58.259007] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.725 [2024-07-15 16:07:58.267977] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.725 [2024-07-15 16:07:58.267992] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.725 [2024-07-15 16:07:58.276363] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.725 [2024-07-15 16:07:58.276377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.725 [2024-07-15 16:07:58.285614] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.725 [2024-07-15 16:07:58.285628] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.725 [2024-07-15 16:07:58.294103] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.725 [2024-07-15 16:07:58.294118] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.725 [2024-07-15 16:07:58.302205] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.725 [2024-07-15 16:07:58.302219] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.725 [2024-07-15 16:07:58.310790] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.725 [2024-07-15 16:07:58.310804] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.725 [2024-07-15 16:07:58.319538] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.725 [2024-07-15 16:07:58.319552] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.725 [2024-07-15 16:07:58.328149] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.725 [2024-07-15 16:07:58.328163] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.725 [2024-07-15 16:07:58.337050] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.725 [2024-07-15 16:07:58.337064] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.725 [2024-07-15 16:07:58.346016] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.725 [2024-07-15 16:07:58.346030] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.726 [2024-07-15 16:07:58.354910] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.726 [2024-07-15 16:07:58.354924] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.726 [2024-07-15 16:07:58.363380] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.726 [2024-07-15 16:07:58.363394] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.726 [2024-07-15 16:07:58.372037] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.726 [2024-07-15 16:07:58.372052] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.726 [2024-07-15 16:07:58.380525] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.726 [2024-07-15 16:07:58.380539] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.726 [2024-07-15 16:07:58.389408] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.726 [2024-07-15 16:07:58.389423] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.726 [2024-07-15 16:07:58.398310] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.726 [2024-07-15 16:07:58.398324] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.726 [2024-07-15 16:07:58.407304] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.726 [2024-07-15 16:07:58.407318] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.726 [2024-07-15 16:07:58.415847] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.726 [2024-07-15 16:07:58.415861] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.726 [2024-07-15 16:07:58.424405] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.726 [2024-07-15 16:07:58.424419] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.726 [2024-07-15 16:07:58.432928] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.726 [2024-07-15 16:07:58.432946] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.726 [2024-07-15 16:07:58.442126] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.726 [2024-07-15 16:07:58.442141] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.726 [2024-07-15 16:07:58.450597] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.726 [2024-07-15 16:07:58.450611] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.726 [2024-07-15 16:07:58.459560] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.726 [2024-07-15 16:07:58.459574] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.726 [2024-07-15 16:07:58.468507] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.726 [2024-07-15 16:07:58.468522] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.726 [2024-07-15 16:07:58.476873] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.726 [2024-07-15 16:07:58.476888] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.726 [2024-07-15 16:07:58.485959] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.726 [2024-07-15 16:07:58.485972] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.726 [2024-07-15 16:07:58.494502] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.726 [2024-07-15 16:07:58.494516] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.726 [2024-07-15 16:07:58.503548] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.726 [2024-07-15 16:07:58.503562] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.726 [2024-07-15 16:07:58.512506] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.726 [2024-07-15 16:07:58.512521] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.726 [2024-07-15 16:07:58.520955] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.726 [2024-07-15 16:07:58.520969] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.726 [2024-07-15 16:07:58.529671] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.726 [2024-07-15 16:07:58.529685] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.726 [2024-07-15 16:07:58.538667] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.726 [2024-07-15 16:07:58.538682] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.726 [2024-07-15 16:07:58.546716] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.726 [2024-07-15 16:07:58.546730] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.726 [2024-07-15 16:07:58.555496] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.726 [2024-07-15 16:07:58.555510] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.726 [2024-07-15 16:07:58.564158] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.726 [2024-07-15 16:07:58.564172] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.986 [2024-07-15 16:07:58.573084] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.986 [2024-07-15 16:07:58.573099] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.986 [2024-07-15 16:07:58.581740] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.986 [2024-07-15 16:07:58.581754] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.986 [2024-07-15 16:07:58.590241] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.986 [2024-07-15 16:07:58.590255] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.986 [2024-07-15 16:07:58.598641] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.986 [2024-07-15 16:07:58.598661] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.986 [2024-07-15 16:07:58.607355] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.986 [2024-07-15 16:07:58.607369] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.986 [2024-07-15 16:07:58.615956] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.986 [2024-07-15 16:07:58.615971] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.986 [2024-07-15 16:07:58.624694] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.986 [2024-07-15 16:07:58.624709] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.986 [2024-07-15 16:07:58.633624] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.986 [2024-07-15 16:07:58.633638] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.986 [2024-07-15 16:07:58.641412] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.986 [2024-07-15 16:07:58.641426] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.986 [2024-07-15 16:07:58.650669] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.986 [2024-07-15 16:07:58.650683] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.986 [2024-07-15 16:07:58.659186] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.986 [2024-07-15 16:07:58.659200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.986 [2024-07-15 16:07:58.667780] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.986 [2024-07-15 16:07:58.667794] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.986 [2024-07-15 16:07:58.676638] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.986 [2024-07-15 16:07:58.676652] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.986 [2024-07-15 16:07:58.684969] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.986 [2024-07-15 16:07:58.684983] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.986 [2024-07-15 16:07:58.694031] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.986 [2024-07-15 16:07:58.694045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.986 [2024-07-15 16:07:58.702906] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.986 [2024-07-15 16:07:58.702920] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.986 [2024-07-15 16:07:58.711310] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.986 [2024-07-15 16:07:58.711324] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.987 [2024-07-15 16:07:58.720248] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.987 [2024-07-15 16:07:58.720262] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.987 [2024-07-15 16:07:58.728593] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.987 [2024-07-15 16:07:58.728607] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.987 [2024-07-15 16:07:58.737081] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.987 [2024-07-15 16:07:58.737095] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.987 00:16:22.987 Latency(us) 00:16:22.987 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:22.987 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:16:22.987 Nvme1n1 : 5.01 19400.76 151.57 0.00 0.00 6590.37 2457.60 14964.05 00:16:22.987 =================================================================================================================== 00:16:22.987 Total : 19400.76 151.57 0.00 0.00 6590.37 2457.60 14964.05 00:16:22.987 [2024-07-15 16:07:58.743184] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.987 [2024-07-15 16:07:58.743198] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.987 [2024-07-15 16:07:58.751204] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.987 [2024-07-15 16:07:58.751216] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.987 [2024-07-15 16:07:58.759223] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.987 [2024-07-15 16:07:58.759234] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.987 [2024-07-15 16:07:58.767246] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.987 [2024-07-15 16:07:58.767256] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.987 [2024-07-15 16:07:58.775275] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.987 [2024-07-15 16:07:58.775285] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.987 [2024-07-15 16:07:58.783285] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.987 [2024-07-15 16:07:58.783295] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.987 [2024-07-15 16:07:58.791304] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.987 [2024-07-15 16:07:58.791313] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.987 [2024-07-15 16:07:58.799323] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.987 [2024-07-15 16:07:58.799332] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.987 [2024-07-15 16:07:58.807344] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.987 [2024-07-15 16:07:58.807353] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.987 [2024-07-15 16:07:58.815363] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.987 [2024-07-15 16:07:58.815370] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.987 [2024-07-15 16:07:58.823384] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.987 [2024-07-15 16:07:58.823391] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.247 [2024-07-15 16:07:58.831406] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.247 [2024-07-15 16:07:58.831415] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.247 [2024-07-15 16:07:58.839426] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.247 [2024-07-15 16:07:58.839434] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.247 [2024-07-15 16:07:58.847445] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.247 [2024-07-15 16:07:58.847452] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.247 [2024-07-15 16:07:58.855469] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.247 [2024-07-15 16:07:58.855480] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.247 [2024-07-15 16:07:58.863487] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.247 [2024-07-15 16:07:58.863494] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.247 [2024-07-15 16:07:58.871507] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.247 [2024-07-15 16:07:58.871515] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.247 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2255338) - No such process 00:16:23.247 16:07:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2255338 00:16:23.247 16:07:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:23.247 16:07:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.247 16:07:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:23.247 16:07:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.247 16:07:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:23.247 16:07:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.247 16:07:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:23.247 delay0 00:16:23.247 16:07:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.247 16:07:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:16:23.247 16:07:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.247 16:07:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:23.247 
16:07:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.247 16:07:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:23.247 EAL: No free 2048 kB hugepages reported on node 1 00:16:23.247 [2024-07-15 16:07:58.960228] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:29.862 Initializing NVMe Controllers 00:16:29.862 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:29.862 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:29.862 Initialization complete. Launching workers. 00:16:29.862 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 70 00:16:29.862 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 357, failed to submit 33 00:16:29.862 success 97, unsuccess 260, failed 0 00:16:29.862 16:08:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:29.862 16:08:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:16:29.862 16:08:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:29.862 16:08:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:16:29.862 16:08:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:29.862 16:08:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:16:29.862 16:08:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:29.862 16:08:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:29.862 rmmod nvme_tcp 00:16:29.862 rmmod nvme_fabrics 00:16:29.862 rmmod nvme_keyring 00:16:29.862 16:08:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:29.862 16:08:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:16:29.862 16:08:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:16:29.862 16:08:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 2253152 ']' 00:16:29.862 16:08:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 2253152 00:16:29.862 16:08:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 2253152 ']' 00:16:29.862 16:08:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 2253152 00:16:29.862 16:08:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:16:29.862 16:08:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:29.862 16:08:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2253152 00:16:29.862 16:08:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:29.862 16:08:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:29.862 16:08:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2253152' 00:16:29.862 killing process with pid 2253152 00:16:29.862 16:08:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 2253152 00:16:29.862 16:08:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 2253152 00:16:29.862 16:08:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:29.862 16:08:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp 
== \t\c\p ]] 00:16:29.862 16:08:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:29.862 16:08:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:29.862 16:08:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:29.862 16:08:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:29.862 16:08:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:29.862 16:08:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:31.777 16:08:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:31.777 00:16:31.777 real 0m33.095s 00:16:31.777 user 0m44.672s 00:16:31.777 sys 0m10.576s 00:16:31.777 16:08:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:31.777 16:08:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:31.777 ************************************ 00:16:31.777 END TEST nvmf_zcopy 00:16:31.777 ************************************ 00:16:31.777 16:08:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:31.777 16:08:07 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:31.777 16:08:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:31.777 16:08:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:31.777 16:08:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:31.777 ************************************ 00:16:31.777 START TEST nvmf_nmic 00:16:31.777 ************************************ 00:16:31.777 16:08:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:32.038 * Looking for test storage... 
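[editor's note] The teardown just traced (nvmftestfini) is the mirror of the per-test setup: settle outstanding I/O, unload the kernel NVMe/TCP initiator modules, kill the nvmf_tgt process, and strip the test addressing. A rough sketch of the same steps, using the names recorded in this run; the body of _remove_spdk_ns is not shown in the trace, so the namespace cleanup line is an assumption:

    sync                                          # settle outstanding I/O first
    modprobe -v -r nvme-tcp                       # also drops nvme_fabrics / nvme_keyring, as logged
    modprobe -v -r nvme-fabrics
    kill 2253152                                  # nvmf_tgt pid recorded for this run
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumed content of _remove_spdk_ns (not shown above)
    ip -4 addr flush cvl_0_1                      # remove the 10.0.0.1/24 test address, as logged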
00:16:32.038 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:32.038 16:08:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:32.038 16:08:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:16:32.038 16:08:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:32.038 16:08:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:32.038 16:08:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:32.038 16:08:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:32.038 16:08:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:32.038 16:08:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:32.038 16:08:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:32.038 16:08:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:32.038 16:08:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:32.038 16:08:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:32.038 16:08:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:32.038 16:08:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:32.038 16:08:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:32.038 16:08:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:32.038 16:08:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:32.038 16:08:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:32.038 16:08:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:32.038 16:08:07 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:32.038 16:08:07 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:32.038 16:08:07 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:32.038 16:08:07 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.038 16:08:07 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.039 16:08:07 
nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.039 16:08:07 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:16:32.039 16:08:07 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.039 16:08:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:16:32.039 16:08:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:32.039 16:08:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:32.039 16:08:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:32.039 16:08:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:32.039 16:08:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:32.039 16:08:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:32.039 16:08:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:32.039 16:08:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:32.039 16:08:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:32.039 16:08:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:32.039 16:08:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:16:32.039 16:08:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:32.039 16:08:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:32.039 16:08:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:32.039 16:08:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:32.039 16:08:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:32.039 16:08:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:32.039 16:08:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:32.039 16:08:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:32.039 16:08:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:32.039 16:08:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:32.039 16:08:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:16:32.039 16:08:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:38.627 
16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:38.627 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:38.627 16:08:14 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:38.627 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:38.627 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:38.627 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
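[editor's note] The device scan traced above boils down to a sysfs walk: for every supported E810 PCI function (vendor 0x8086, device 0x159b) the script lists /sys/bus/pci/devices/<bdf>/net/ to learn which kernel net interface backs it. A minimal stand-alone sketch of that lookup, assuming the same vendor/device IDs and sysfs layout seen in this run:

    # List net interfaces backed by Intel E810 functions, as the trace above does.
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(cat "$pci/vendor")          # e.g. 0x8086
        device=$(cat "$pci/device")          # e.g. 0x159b
        [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
        for net in "$pci"/net/*; do
            [[ -e $net ]] || continue        # skip functions with no bound net driver
            echo "Found net device under ${pci##*/}: ${net##*/}"   # cvl_0_0 / cvl_0_1 on this host
        done
    done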
00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:38.627 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:38.888 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:38.888 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:38.888 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:38.888 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:38.888 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:38.888 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:38.888 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:38.888 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:38.888 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.459 ms 00:16:38.888 00:16:38.888 --- 10.0.0.2 ping statistics --- 00:16:38.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:38.888 rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms 00:16:38.888 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:38.888 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:38.888 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:16:38.888 00:16:38.888 --- 10.0.0.1 ping statistics --- 00:16:38.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:38.888 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:16:38.888 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:38.888 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:16:38.888 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:38.888 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:38.888 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:38.888 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:38.888 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:38.888 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:38.888 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:38.888 16:08:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:38.888 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:38.888 16:08:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:38.888 16:08:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:38.888 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2261843 00:16:38.888 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2261843 00:16:38.888 16:08:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:38.888 16:08:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 2261843 ']' 00:16:38.888 16:08:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.888 16:08:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:38.888 16:08:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:38.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:38.888 16:08:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:38.888 16:08:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:39.149 [2024-07-15 16:08:14.756958] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:16:39.149 [2024-07-15 16:08:14.757019] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:39.149 EAL: No free 2048 kB hugepages reported on node 1 00:16:39.149 [2024-07-15 16:08:14.826997] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:39.149 [2024-07-15 16:08:14.893838] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:39.149 [2024-07-15 16:08:14.893873] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
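[editor's note] Everything from ip netns add through the two pings above is the standard two-namespace topology this suite uses: the first E810 port (cvl_0_0, 10.0.0.2) is moved into a private namespace for the target, the second port (cvl_0_1, 10.0.0.1) stays in the root namespace for the initiator, and the target application is then launched inside that namespace. A condensed sketch of the same setup, using the interface names and addresses from this run:

    # Namespace split used above; interface names and IPs are specific to this host.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
    # the target then runs inside the namespace, exactly as logged:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF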
00:16:39.149 [2024-07-15 16:08:14.893881] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:39.149 [2024-07-15 16:08:14.893887] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:39.149 [2024-07-15 16:08:14.893893] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:39.149 [2024-07-15 16:08:14.894026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:39.149 [2024-07-15 16:08:14.894049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:39.149 [2024-07-15 16:08:14.894209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:39.149 [2024-07-15 16:08:14.894355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.722 16:08:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:39.722 16:08:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:16:39.722 16:08:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:39.722 16:08:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:39.722 16:08:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:39.983 16:08:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:39.983 16:08:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:39.983 16:08:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.983 16:08:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:39.983 [2024-07-15 16:08:15.575813] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:39.983 16:08:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.983 16:08:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:39.983 16:08:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.983 16:08:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:39.983 Malloc0 00:16:39.983 16:08:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.983 16:08:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:39.983 16:08:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.983 16:08:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:39.983 16:08:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.984 16:08:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:39.984 16:08:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.984 16:08:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:39.984 16:08:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.984 16:08:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:39.984 16:08:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.984 16:08:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:39.984 [2024-07-15 16:08:15.635033] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:39.984 16:08:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.984 16:08:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:39.984 test case1: single bdev can't be used in multiple subsystems 00:16:39.984 16:08:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:39.984 16:08:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.984 16:08:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:39.984 16:08:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.984 16:08:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:39.984 16:08:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.984 16:08:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:39.984 16:08:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.984 16:08:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:16:39.984 16:08:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:39.984 16:08:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.984 16:08:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:39.984 [2024-07-15 16:08:15.671027] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:39.984 [2024-07-15 16:08:15.671048] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:39.984 [2024-07-15 16:08:15.671055] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.984 request: 00:16:39.984 { 00:16:39.984 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:39.984 "namespace": { 00:16:39.984 "bdev_name": "Malloc0", 00:16:39.984 "no_auto_visible": false 00:16:39.984 }, 00:16:39.984 "method": "nvmf_subsystem_add_ns", 00:16:39.984 "req_id": 1 00:16:39.984 } 00:16:39.984 Got JSON-RPC error response 00:16:39.984 response: 00:16:39.984 { 00:16:39.984 "code": -32602, 00:16:39.984 "message": "Invalid parameters" 00:16:39.984 } 00:16:39.984 16:08:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:39.984 16:08:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:16:39.984 16:08:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:39.984 16:08:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:16:39.984 Adding namespace failed - expected result. 
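For reference, the sequence test case 1 exercises above can be reproduced by hand with rpc.py. This is only a minimal sketch, assuming a running nvmf_tgt and reusing the bdev, subsystem, and listener parameters that appear in the trace; the $rpc variable is just shorthand for the rpc.py path used elsewhere in this run. The second nvmf_subsystem_add_ns is the call that is expected to fail with "Invalid parameters" (-32602), because Malloc0 is already claimed exclusive_write by cnode1.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # transport, backing bdev, first subsystem with the namespace and a TCP listener
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # second subsystem: adding the same bdev as a namespace is expected to fail,
    # since Malloc0 is already claimed by cnode1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || echo 'expected failure'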
00:16:39.984 16:08:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:39.984 test case2: host connect to nvmf target in multiple paths 00:16:39.984 16:08:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:39.984 16:08:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.984 16:08:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:39.984 [2024-07-15 16:08:15.683180] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:39.984 16:08:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.984 16:08:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:41.367 16:08:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:43.276 16:08:18 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:43.276 16:08:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:16:43.276 16:08:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:43.276 16:08:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:43.276 16:08:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:16:45.200 16:08:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:45.200 16:08:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:45.200 16:08:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:45.200 16:08:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:45.200 16:08:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:45.200 16:08:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:16:45.200 16:08:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:45.200 [global] 00:16:45.200 thread=1 00:16:45.200 invalidate=1 00:16:45.200 rw=write 00:16:45.200 time_based=1 00:16:45.200 runtime=1 00:16:45.200 ioengine=libaio 00:16:45.200 direct=1 00:16:45.200 bs=4096 00:16:45.200 iodepth=1 00:16:45.200 norandommap=0 00:16:45.200 numjobs=1 00:16:45.200 00:16:45.200 verify_dump=1 00:16:45.200 verify_backlog=512 00:16:45.200 verify_state_save=0 00:16:45.200 do_verify=1 00:16:45.200 verify=crc32c-intel 00:16:45.200 [job0] 00:16:45.200 filename=/dev/nvme0n1 00:16:45.200 Could not set queue depth (nvme0n1) 00:16:45.460 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:45.460 fio-3.35 00:16:45.460 Starting 1 thread 00:16:46.403 00:16:46.403 job0: (groupid=0, jobs=1): err= 0: pid=2263319: Mon Jul 15 16:08:22 2024 00:16:46.403 read: IOPS=13, BW=55.3KiB/s (56.6kB/s)(56.0KiB/1013msec) 00:16:46.403 slat (nsec): min=24322, max=29547, avg=24962.71, stdev=1330.83 
00:16:46.403 clat (usec): min=41449, max=42007, avg=41930.92, stdev=141.82 00:16:46.403 lat (usec): min=41473, max=42034, avg=41955.88, stdev=142.11 00:16:46.403 clat percentiles (usec): 00:16:46.403 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:16:46.403 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:16:46.403 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:46.403 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:46.403 | 99.99th=[42206] 00:16:46.403 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:16:46.403 slat (usec): min=9, max=29051, avg=86.45, stdev=1282.64 00:16:46.403 clat (usec): min=388, max=1021, avg=736.73, stdev=85.26 00:16:46.403 lat (usec): min=399, max=29871, avg=823.17, stdev=1289.30 00:16:46.403 clat percentiles (usec): 00:16:46.403 | 1.00th=[ 502], 5.00th=[ 594], 10.00th=[ 627], 20.00th=[ 668], 00:16:46.403 | 30.00th=[ 701], 40.00th=[ 717], 50.00th=[ 734], 60.00th=[ 758], 00:16:46.403 | 70.00th=[ 799], 80.00th=[ 816], 90.00th=[ 832], 95.00th=[ 857], 00:16:46.403 | 99.00th=[ 881], 99.50th=[ 922], 99.90th=[ 1020], 99.95th=[ 1020], 00:16:46.403 | 99.99th=[ 1020] 00:16:46.403 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:16:46.403 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:46.403 lat (usec) : 500=0.95%, 750=53.99%, 1000=42.21% 00:16:46.403 lat (msec) : 2=0.19%, 50=2.66% 00:16:46.403 cpu : usr=0.99%, sys=1.19%, ctx=530, majf=0, minf=1 00:16:46.403 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:46.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:46.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:46.403 issued rwts: total=14,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:46.403 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:46.403 00:16:46.403 Run status group 0 (all jobs): 00:16:46.403 READ: bw=55.3KiB/s (56.6kB/s), 55.3KiB/s-55.3KiB/s (56.6kB/s-56.6kB/s), io=56.0KiB (57.3kB), run=1013-1013msec 00:16:46.403 WRITE: bw=2022KiB/s (2070kB/s), 2022KiB/s-2022KiB/s (2070kB/s-2070kB/s), io=2048KiB (2097kB), run=1013-1013msec 00:16:46.403 00:16:46.403 Disk stats (read/write): 00:16:46.403 nvme0n1: ios=36/512, merge=0/0, ticks=1429/365, in_queue=1794, util=99.00% 00:16:46.403 16:08:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:46.664 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:46.664 16:08:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:46.664 16:08:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:16:46.664 16:08:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:46.664 16:08:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:46.664 16:08:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:46.664 16:08:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:46.664 16:08:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:16:46.664 16:08:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:46.664 16:08:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:16:46.664 16:08:22 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:16:46.664 16:08:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:16:46.664 16:08:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:46.664 16:08:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:16:46.664 16:08:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:46.664 16:08:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:46.664 rmmod nvme_tcp 00:16:46.664 rmmod nvme_fabrics 00:16:46.664 rmmod nvme_keyring 00:16:46.664 16:08:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:46.664 16:08:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:16:46.664 16:08:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:16:46.664 16:08:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 2261843 ']' 00:16:46.664 16:08:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2261843 00:16:46.664 16:08:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 2261843 ']' 00:16:46.664 16:08:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 2261843 00:16:46.664 16:08:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:16:46.664 16:08:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:46.664 16:08:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2261843 00:16:46.926 16:08:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:46.926 16:08:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:46.926 16:08:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2261843' 00:16:46.926 killing process with pid 2261843 00:16:46.926 16:08:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 2261843 00:16:46.926 16:08:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 2261843 00:16:46.926 16:08:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:46.926 16:08:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:46.926 16:08:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:46.926 16:08:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:46.926 16:08:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:46.926 16:08:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:46.926 16:08:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:46.926 16:08:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.524 16:08:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:49.524 00:16:49.524 real 0m17.227s 00:16:49.524 user 0m45.698s 00:16:49.524 sys 0m5.919s 00:16:49.524 16:08:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:49.524 16:08:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:49.524 ************************************ 00:16:49.524 END TEST nvmf_nmic 00:16:49.524 ************************************ 00:16:49.524 16:08:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:49.524 16:08:24 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:49.524 16:08:24 
nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:49.524 16:08:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:49.524 16:08:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:49.524 ************************************ 00:16:49.524 START TEST nvmf_fio_target 00:16:49.524 ************************************ 00:16:49.524 16:08:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:49.524 * Looking for test storage... 00:16:49.524 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:49.524 16:08:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:49.525 16:08:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.119 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:56.119 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:56.119 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:56.119 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:56.119 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:56.119 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:56.119 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:56.119 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:56.119 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:56.119 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:16:56.119 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:56.119 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:16:56.119 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:56.119 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:56.120 16:08:31 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:56.120 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:56.120 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:56.120 16:08:31 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:56.120 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:56.120 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:56.120 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:56.382 16:08:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:16:56.382 16:08:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:56.382 16:08:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:56.382 16:08:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:56.382 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:56.382 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.490 ms 00:16:56.382 00:16:56.382 --- 10.0.0.2 ping statistics --- 00:16:56.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.382 rtt min/avg/max/mdev = 0.490/0.490/0.490/0.000 ms 00:16:56.382 16:08:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:56.382 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:56.382 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:16:56.382 00:16:56.382 --- 10.0.0.1 ping statistics --- 00:16:56.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.382 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:16:56.382 16:08:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:56.382 16:08:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:16:56.382 16:08:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:56.382 16:08:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:56.382 16:08:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:56.382 16:08:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:56.382 16:08:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:56.382 16:08:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:56.382 16:08:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:56.382 16:08:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:56.382 16:08:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:56.382 16:08:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:56.382 16:08:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.382 16:08:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2267714 00:16:56.382 16:08:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2267714 00:16:56.382 16:08:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:56.382 16:08:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 2267714 ']' 00:16:56.382 16:08:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.382 16:08:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:56.382 16:08:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
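The network plumbing traced by nvmf_tcp_init above amounts to moving one e810 port (cvl_0_0) into a private namespace for the target while its peer (cvl_0_1) stays on the host as the initiator side. A rough sketch of the same steps, using only the interface names, addresses, and commands shown in this run ($ns is just shorthand here, not a variable from the scripts):

    ns=cvl_0_0_ns_spdk

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1

    ip netns add $ns
    ip link set cvl_0_0 netns $ns                         # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays on the host
    ip netns exec $ns ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec $ns ip link set cvl_0_0 up
    ip netns exec $ns ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    ping -c 1 10.0.0.2                                    # host -> namespace
    ip netns exec $ns ping -c 1 10.0.0.1                  # namespace -> host

The target itself is then launched inside the namespace, as the trace shows: ip netns exec $ns .../spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF, after which the RPC socket at /var/tmp/spdk.sock is polled until it answers.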
00:16:56.382 16:08:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:56.382 16:08:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.382 [2024-07-15 16:08:32.196036] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:16:56.382 [2024-07-15 16:08:32.196092] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:56.643 EAL: No free 2048 kB hugepages reported on node 1 00:16:56.643 [2024-07-15 16:08:32.260061] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:56.643 [2024-07-15 16:08:32.325611] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:56.643 [2024-07-15 16:08:32.325643] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:56.643 [2024-07-15 16:08:32.325653] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:56.643 [2024-07-15 16:08:32.325660] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:56.643 [2024-07-15 16:08:32.325665] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:56.643 [2024-07-15 16:08:32.325804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:56.643 [2024-07-15 16:08:32.325908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:56.643 [2024-07-15 16:08:32.326063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.643 [2024-07-15 16:08:32.326064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:57.215 16:08:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:57.215 16:08:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:16:57.215 16:08:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:57.215 16:08:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:57.215 16:08:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.215 16:08:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:57.215 16:08:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:57.488 [2024-07-15 16:08:33.163315] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:57.488 16:08:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:57.754 16:08:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:57.754 16:08:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:57.754 16:08:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:57.754 16:08:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:58.014 16:08:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
00:16:58.014 16:08:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:58.274 16:08:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:58.274 16:08:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:58.274 16:08:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:58.533 16:08:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:58.533 16:08:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:58.793 16:08:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:58.793 16:08:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:58.793 16:08:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:58.793 16:08:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:59.053 16:08:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:59.319 16:08:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:59.319 16:08:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:59.319 16:08:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:59.319 16:08:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:59.585 16:08:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:59.585 [2024-07-15 16:08:35.420176] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:59.846 16:08:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:59.846 16:08:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:17:00.106 16:08:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:02.018 16:08:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:17:02.018 16:08:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:17:02.018 16:08:37 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:02.018 16:08:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:17:02.018 16:08:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:17:02.018 16:08:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:17:03.931 16:08:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:03.931 16:08:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:03.931 16:08:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:03.931 16:08:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:17:03.931 16:08:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:03.931 16:08:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:17:03.931 16:08:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:03.931 [global] 00:17:03.931 thread=1 00:17:03.931 invalidate=1 00:17:03.931 rw=write 00:17:03.931 time_based=1 00:17:03.931 runtime=1 00:17:03.931 ioengine=libaio 00:17:03.931 direct=1 00:17:03.931 bs=4096 00:17:03.931 iodepth=1 00:17:03.931 norandommap=0 00:17:03.931 numjobs=1 00:17:03.931 00:17:03.931 verify_dump=1 00:17:03.931 verify_backlog=512 00:17:03.931 verify_state_save=0 00:17:03.931 do_verify=1 00:17:03.931 verify=crc32c-intel 00:17:03.931 [job0] 00:17:03.931 filename=/dev/nvme0n1 00:17:03.931 [job1] 00:17:03.931 filename=/dev/nvme0n2 00:17:03.931 [job2] 00:17:03.931 filename=/dev/nvme0n3 00:17:03.931 [job3] 00:17:03.931 filename=/dev/nvme0n4 00:17:03.931 Could not set queue depth (nvme0n1) 00:17:03.931 Could not set queue depth (nvme0n2) 00:17:03.931 Could not set queue depth (nvme0n3) 00:17:03.931 Could not set queue depth (nvme0n4) 00:17:04.191 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:04.191 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:04.191 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:04.191 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:04.191 fio-3.35 00:17:04.191 Starting 4 threads 00:17:05.573 00:17:05.573 job0: (groupid=0, jobs=1): err= 0: pid=2269341: Mon Jul 15 16:08:41 2024 00:17:05.573 read: IOPS=398, BW=1594KiB/s (1633kB/s)(1596KiB/1001msec) 00:17:05.573 slat (nsec): min=8317, max=59033, avg=24951.64, stdev=3476.46 00:17:05.573 clat (usec): min=980, max=41989, avg=1392.78, stdev=2039.94 00:17:05.573 lat (usec): min=1008, max=41997, avg=1417.73, stdev=2039.10 00:17:05.573 clat percentiles (usec): 00:17:05.573 | 1.00th=[ 1045], 5.00th=[ 1123], 10.00th=[ 1172], 20.00th=[ 1221], 00:17:05.573 | 30.00th=[ 1254], 40.00th=[ 1270], 50.00th=[ 1287], 60.00th=[ 1319], 00:17:05.573 | 70.00th=[ 1319], 80.00th=[ 1352], 90.00th=[ 1401], 95.00th=[ 1467], 00:17:05.573 | 99.00th=[ 1680], 99.50th=[ 1713], 99.90th=[42206], 99.95th=[42206], 00:17:05.573 | 99.99th=[42206] 00:17:05.573 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:17:05.573 slat (nsec): min=10264, max=59025, avg=30380.18, stdev=8493.80 
00:17:05.573 clat (usec): min=395, max=4007, avg=803.01, stdev=228.48 00:17:05.573 lat (usec): min=428, max=4040, avg=833.39, stdev=229.67 00:17:05.573 clat percentiles (usec): 00:17:05.573 | 1.00th=[ 461], 5.00th=[ 545], 10.00th=[ 603], 20.00th=[ 652], 00:17:05.573 | 30.00th=[ 701], 40.00th=[ 750], 50.00th=[ 791], 60.00th=[ 832], 00:17:05.573 | 70.00th=[ 873], 80.00th=[ 914], 90.00th=[ 971], 95.00th=[ 1045], 00:17:05.573 | 99.00th=[ 1401], 99.50th=[ 1942], 99.90th=[ 4015], 99.95th=[ 4015], 00:17:05.574 | 99.99th=[ 4015] 00:17:05.574 bw ( KiB/s): min= 4096, max= 4096, per=50.20%, avg=4096.00, stdev= 0.00, samples=1 00:17:05.574 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:05.574 lat (usec) : 500=0.77%, 750=21.84%, 1000=29.86% 00:17:05.574 lat (msec) : 2=47.20%, 4=0.11%, 10=0.11%, 50=0.11% 00:17:05.574 cpu : usr=1.60%, sys=2.40%, ctx=915, majf=0, minf=1 00:17:05.574 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:05.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.574 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.574 issued rwts: total=399,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:05.574 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:05.574 job1: (groupid=0, jobs=1): err= 0: pid=2269348: Mon Jul 15 16:08:41 2024 00:17:05.574 read: IOPS=86, BW=348KiB/s (356kB/s)(348KiB/1001msec) 00:17:05.574 slat (nsec): min=24800, max=44429, avg=26541.25, stdev=2936.08 00:17:05.574 clat (usec): min=606, max=42960, avg=6266.61, stdev=13647.14 00:17:05.574 lat (usec): min=632, max=42985, avg=6293.15, stdev=13646.70 00:17:05.574 clat percentiles (usec): 00:17:05.574 | 1.00th=[ 603], 5.00th=[ 898], 10.00th=[ 971], 20.00th=[ 1037], 00:17:05.574 | 30.00th=[ 1074], 40.00th=[ 1106], 50.00th=[ 1139], 60.00th=[ 1188], 00:17:05.574 | 70.00th=[ 1205], 80.00th=[ 1254], 90.00th=[42206], 95.00th=[42206], 00:17:05.574 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:17:05.574 | 99.99th=[42730] 00:17:05.574 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:17:05.574 slat (usec): min=10, max=44125, avg=119.28, stdev=1948.63 00:17:05.574 clat (usec): min=209, max=1139, avg=753.83, stdev=141.85 00:17:05.574 lat (usec): min=224, max=44881, avg=873.11, stdev=1953.93 00:17:05.574 clat percentiles (usec): 00:17:05.574 | 1.00th=[ 330], 5.00th=[ 519], 10.00th=[ 570], 20.00th=[ 635], 00:17:05.574 | 30.00th=[ 685], 40.00th=[ 725], 50.00th=[ 758], 60.00th=[ 799], 00:17:05.574 | 70.00th=[ 840], 80.00th=[ 881], 90.00th=[ 922], 95.00th=[ 963], 00:17:05.574 | 99.00th=[ 1012], 99.50th=[ 1057], 99.90th=[ 1139], 99.95th=[ 1139], 00:17:05.574 | 99.99th=[ 1139] 00:17:05.574 bw ( KiB/s): min= 4096, max= 4096, per=50.20%, avg=4096.00, stdev= 0.00, samples=1 00:17:05.574 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:05.574 lat (usec) : 250=0.17%, 500=2.84%, 750=36.73%, 1000=46.58% 00:17:05.574 lat (msec) : 2=11.85%, 50=1.84% 00:17:05.574 cpu : usr=1.40%, sys=2.00%, ctx=602, majf=0, minf=1 00:17:05.574 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:05.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.574 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.574 issued rwts: total=87,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:05.574 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:05.574 job2: (groupid=0, jobs=1): 
err= 0: pid=2269361: Mon Jul 15 16:08:41 2024 00:17:05.574 read: IOPS=13, BW=55.8KiB/s (57.1kB/s)(56.0KiB/1004msec) 00:17:05.574 slat (nsec): min=26595, max=31403, avg=27155.79, stdev=1232.37 00:17:05.574 clat (usec): min=1037, max=45206, avg=39426.89, stdev=11084.64 00:17:05.574 lat (usec): min=1064, max=45238, avg=39454.04, stdev=11084.78 00:17:05.574 clat percentiles (usec): 00:17:05.574 | 1.00th=[ 1037], 5.00th=[ 1037], 10.00th=[41681], 20.00th=[41681], 00:17:05.574 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:17:05.574 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[45351], 00:17:05.574 | 99.00th=[45351], 99.50th=[45351], 99.90th=[45351], 99.95th=[45351], 00:17:05.574 | 99.99th=[45351] 00:17:05.574 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:17:05.574 slat (nsec): min=9517, max=57052, avg=34035.31, stdev=7084.23 00:17:05.574 clat (usec): min=394, max=1350, avg=837.63, stdev=113.73 00:17:05.574 lat (usec): min=406, max=1386, avg=871.66, stdev=116.04 00:17:05.574 clat percentiles (usec): 00:17:05.574 | 1.00th=[ 553], 5.00th=[ 668], 10.00th=[ 685], 20.00th=[ 742], 00:17:05.574 | 30.00th=[ 783], 40.00th=[ 816], 50.00th=[ 840], 60.00th=[ 873], 00:17:05.574 | 70.00th=[ 906], 80.00th=[ 938], 90.00th=[ 979], 95.00th=[ 1004], 00:17:05.574 | 99.00th=[ 1057], 99.50th=[ 1139], 99.90th=[ 1352], 99.95th=[ 1352], 00:17:05.574 | 99.99th=[ 1352] 00:17:05.574 bw ( KiB/s): min= 4096, max= 4096, per=50.20%, avg=4096.00, stdev= 0.00, samples=1 00:17:05.574 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:05.574 lat (usec) : 500=0.57%, 750=21.10%, 1000=70.72% 00:17:05.574 lat (msec) : 2=5.13%, 50=2.47% 00:17:05.574 cpu : usr=1.30%, sys=1.99%, ctx=528, majf=0, minf=1 00:17:05.574 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:05.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.574 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.574 issued rwts: total=14,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:05.574 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:05.574 job3: (groupid=0, jobs=1): err= 0: pid=2269368: Mon Jul 15 16:08:41 2024 00:17:05.574 read: IOPS=424, BW=1698KiB/s (1739kB/s)(1700KiB/1001msec) 00:17:05.574 slat (nsec): min=26318, max=61775, avg=27115.36, stdev=2715.17 00:17:05.574 clat (usec): min=962, max=1536, avg=1278.10, stdev=70.05 00:17:05.574 lat (usec): min=989, max=1563, avg=1305.21, stdev=70.07 00:17:05.574 clat percentiles (usec): 00:17:05.574 | 1.00th=[ 1090], 5.00th=[ 1156], 10.00th=[ 1205], 20.00th=[ 1237], 00:17:05.574 | 30.00th=[ 1254], 40.00th=[ 1270], 50.00th=[ 1287], 60.00th=[ 1287], 00:17:05.574 | 70.00th=[ 1303], 80.00th=[ 1336], 90.00th=[ 1352], 95.00th=[ 1385], 00:17:05.574 | 99.00th=[ 1467], 99.50th=[ 1483], 99.90th=[ 1532], 99.95th=[ 1532], 00:17:05.574 | 99.99th=[ 1532] 00:17:05.574 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:17:05.574 slat (usec): min=10, max=1829, avg=39.85, stdev=81.71 00:17:05.574 clat (usec): min=382, max=1281, avg=812.27, stdev=124.67 00:17:05.574 lat (usec): min=394, max=2910, avg=852.12, stdev=156.95 00:17:05.574 clat percentiles (usec): 00:17:05.574 | 1.00th=[ 486], 5.00th=[ 562], 10.00th=[ 644], 20.00th=[ 725], 00:17:05.574 | 30.00th=[ 758], 40.00th=[ 791], 50.00th=[ 824], 60.00th=[ 848], 00:17:05.574 | 70.00th=[ 889], 80.00th=[ 922], 90.00th=[ 955], 95.00th=[ 988], 00:17:05.574 | 99.00th=[ 1037], 
99.50th=[ 1074], 99.90th=[ 1287], 99.95th=[ 1287], 00:17:05.574 | 99.99th=[ 1287] 00:17:05.574 bw ( KiB/s): min= 4096, max= 4096, per=50.20%, avg=4096.00, stdev= 0.00, samples=1 00:17:05.574 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:05.574 lat (usec) : 500=0.75%, 750=13.98%, 1000=38.53% 00:17:05.574 lat (msec) : 2=46.74% 00:17:05.574 cpu : usr=2.40%, sys=3.60%, ctx=941, majf=0, minf=1 00:17:05.574 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:05.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.574 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.574 issued rwts: total=425,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:05.574 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:05.574 00:17:05.574 Run status group 0 (all jobs): 00:17:05.574 READ: bw=3685KiB/s (3774kB/s), 55.8KiB/s-1698KiB/s (57.1kB/s-1739kB/s), io=3700KiB (3789kB), run=1001-1004msec 00:17:05.574 WRITE: bw=8159KiB/s (8355kB/s), 2040KiB/s-2046KiB/s (2089kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1004msec 00:17:05.574 00:17:05.574 Disk stats (read/write): 00:17:05.574 nvme0n1: ios=330/512, merge=0/0, ticks=1221/385, in_queue=1606, util=84.07% 00:17:05.574 nvme0n2: ios=51/512, merge=0/0, ticks=640/325, in_queue=965, util=90.71% 00:17:05.574 nvme0n3: ios=73/512, merge=0/0, ticks=483/332, in_queue=815, util=95.14% 00:17:05.574 nvme0n4: ios=362/512, merge=0/0, ticks=517/323, in_queue=840, util=96.90% 00:17:05.574 16:08:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:17:05.574 [global] 00:17:05.574 thread=1 00:17:05.574 invalidate=1 00:17:05.574 rw=randwrite 00:17:05.574 time_based=1 00:17:05.574 runtime=1 00:17:05.574 ioengine=libaio 00:17:05.574 direct=1 00:17:05.574 bs=4096 00:17:05.574 iodepth=1 00:17:05.574 norandommap=0 00:17:05.574 numjobs=1 00:17:05.574 00:17:05.574 verify_dump=1 00:17:05.574 verify_backlog=512 00:17:05.574 verify_state_save=0 00:17:05.574 do_verify=1 00:17:05.574 verify=crc32c-intel 00:17:05.574 [job0] 00:17:05.574 filename=/dev/nvme0n1 00:17:05.574 [job1] 00:17:05.574 filename=/dev/nvme0n2 00:17:05.574 [job2] 00:17:05.574 filename=/dev/nvme0n3 00:17:05.574 [job3] 00:17:05.574 filename=/dev/nvme0n4 00:17:05.574 Could not set queue depth (nvme0n1) 00:17:05.574 Could not set queue depth (nvme0n2) 00:17:05.574 Could not set queue depth (nvme0n3) 00:17:05.574 Could not set queue depth (nvme0n4) 00:17:05.835 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:05.835 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:05.835 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:05.835 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:05.835 fio-3.35 00:17:05.835 Starting 4 threads 00:17:07.237 00:17:07.237 job0: (groupid=0, jobs=1): err= 0: pid=2269838: Mon Jul 15 16:08:42 2024 00:17:07.237 read: IOPS=13, BW=54.1KiB/s (55.4kB/s)(56.0KiB/1036msec) 00:17:07.237 slat (nsec): min=25365, max=25959, avg=25701.86, stdev=157.23 00:17:07.237 clat (usec): min=41915, max=42931, avg=42093.39, stdev=320.58 00:17:07.237 lat (usec): min=41941, max=42957, avg=42119.09, stdev=320.67 00:17:07.237 clat percentiles (usec): 
00:17:07.237 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:17:07.237 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:17:07.237 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:17:07.237 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:17:07.237 | 99.99th=[42730] 00:17:07.237 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:17:07.237 slat (nsec): min=8998, max=86538, avg=31491.42, stdev=7277.84 00:17:07.237 clat (usec): min=544, max=1059, avg=831.78, stdev=86.04 00:17:07.237 lat (usec): min=556, max=1090, avg=863.27, stdev=88.35 00:17:07.237 clat percentiles (usec): 00:17:07.237 | 1.00th=[ 603], 5.00th=[ 693], 10.00th=[ 725], 20.00th=[ 758], 00:17:07.237 | 30.00th=[ 783], 40.00th=[ 816], 50.00th=[ 840], 60.00th=[ 865], 00:17:07.237 | 70.00th=[ 889], 80.00th=[ 906], 90.00th=[ 938], 95.00th=[ 955], 00:17:07.237 | 99.00th=[ 996], 99.50th=[ 1012], 99.90th=[ 1057], 99.95th=[ 1057], 00:17:07.237 | 99.99th=[ 1057] 00:17:07.237 bw ( KiB/s): min= 4087, max= 4087, per=47.09%, avg=4087.00, stdev= 0.00, samples=1 00:17:07.237 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:17:07.237 lat (usec) : 750=17.30%, 1000=79.28% 00:17:07.237 lat (msec) : 2=0.76%, 50=2.66% 00:17:07.237 cpu : usr=0.87%, sys=2.22%, ctx=527, majf=0, minf=1 00:17:07.237 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:07.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.237 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.237 issued rwts: total=14,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:07.237 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:07.237 job1: (groupid=0, jobs=1): err= 0: pid=2269842: Mon Jul 15 16:08:42 2024 00:17:07.237 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:17:07.237 slat (nsec): min=25861, max=61476, avg=26911.70, stdev=3253.32 00:17:07.237 clat (usec): min=653, max=1114, avg=939.61, stdev=57.72 00:17:07.237 lat (usec): min=679, max=1140, avg=966.52, stdev=57.86 00:17:07.237 clat percentiles (usec): 00:17:07.237 | 1.00th=[ 734], 5.00th=[ 832], 10.00th=[ 881], 20.00th=[ 914], 00:17:07.237 | 30.00th=[ 922], 40.00th=[ 938], 50.00th=[ 947], 60.00th=[ 963], 00:17:07.237 | 70.00th=[ 963], 80.00th=[ 979], 90.00th=[ 996], 95.00th=[ 1012], 00:17:07.237 | 99.00th=[ 1037], 99.50th=[ 1057], 99.90th=[ 1123], 99.95th=[ 1123], 00:17:07.237 | 99.99th=[ 1123] 00:17:07.237 write: IOPS=711, BW=2845KiB/s (2913kB/s)(2848KiB/1001msec); 0 zone resets 00:17:07.237 slat (nsec): min=8423, max=69157, avg=29236.94, stdev=9884.60 00:17:07.237 clat (usec): min=287, max=991, avg=666.45, stdev=133.37 00:17:07.237 lat (usec): min=296, max=1024, avg=695.69, stdev=137.07 00:17:07.237 clat percentiles (usec): 00:17:07.237 | 1.00th=[ 445], 5.00th=[ 502], 10.00th=[ 519], 20.00th=[ 562], 00:17:07.238 | 30.00th=[ 594], 40.00th=[ 611], 50.00th=[ 627], 60.00th=[ 644], 00:17:07.238 | 70.00th=[ 693], 80.00th=[ 807], 90.00th=[ 881], 95.00th=[ 922], 00:17:07.238 | 99.00th=[ 971], 99.50th=[ 979], 99.90th=[ 996], 99.95th=[ 996], 00:17:07.238 | 99.99th=[ 996] 00:17:07.238 bw ( KiB/s): min= 4096, max= 4096, per=47.19%, avg=4096.00, stdev= 0.00, samples=1 00:17:07.238 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:07.238 lat (usec) : 500=2.70%, 750=41.50%, 1000=51.88% 00:17:07.238 lat (msec) : 2=3.92% 00:17:07.238 cpu : usr=2.40%, sys=4.70%, ctx=1226, majf=0, 
minf=1 00:17:07.238 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:07.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.238 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.238 issued rwts: total=512,712,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:07.238 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:07.238 job2: (groupid=0, jobs=1): err= 0: pid=2269858: Mon Jul 15 16:08:42 2024 00:17:07.238 read: IOPS=15, BW=63.2KiB/s (64.7kB/s)(64.0KiB/1013msec) 00:17:07.238 slat (nsec): min=9487, max=38957, avg=27338.69, stdev=6264.59 00:17:07.238 clat (usec): min=41262, max=43146, avg=42693.13, stdev=492.09 00:17:07.238 lat (usec): min=41289, max=43173, avg=42720.47, stdev=492.54 00:17:07.238 clat percentiles (usec): 00:17:07.238 | 1.00th=[41157], 5.00th=[41157], 10.00th=[42206], 20.00th=[42730], 00:17:07.238 | 30.00th=[42730], 40.00th=[42730], 50.00th=[42730], 60.00th=[42730], 00:17:07.238 | 70.00th=[42730], 80.00th=[43254], 90.00th=[43254], 95.00th=[43254], 00:17:07.238 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:17:07.238 | 99.99th=[43254] 00:17:07.238 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:17:07.238 slat (nsec): min=8813, max=58751, avg=19081.34, stdev=11815.76 00:17:07.238 clat (usec): min=249, max=1184, avg=618.81, stdev=188.05 00:17:07.238 lat (usec): min=258, max=1216, avg=637.89, stdev=197.95 00:17:07.238 clat percentiles (usec): 00:17:07.238 | 1.00th=[ 330], 5.00th=[ 383], 10.00th=[ 404], 20.00th=[ 449], 00:17:07.238 | 30.00th=[ 502], 40.00th=[ 529], 50.00th=[ 553], 60.00th=[ 603], 00:17:07.238 | 70.00th=[ 734], 80.00th=[ 832], 90.00th=[ 898], 95.00th=[ 930], 00:17:07.238 | 99.00th=[ 1057], 99.50th=[ 1090], 99.90th=[ 1188], 99.95th=[ 1188], 00:17:07.238 | 99.99th=[ 1188] 00:17:07.238 bw ( KiB/s): min= 4087, max= 4087, per=47.09%, avg=4087.00, stdev= 0.00, samples=1 00:17:07.238 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:17:07.238 lat (usec) : 250=0.19%, 500=28.60%, 750=40.53%, 1000=25.57% 00:17:07.238 lat (msec) : 2=2.08%, 50=3.03% 00:17:07.238 cpu : usr=0.30%, sys=1.28%, ctx=531, majf=0, minf=1 00:17:07.238 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:07.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.238 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.238 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:07.238 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:07.238 job3: (groupid=0, jobs=1): err= 0: pid=2269863: Mon Jul 15 16:08:42 2024 00:17:07.238 read: IOPS=461, BW=1846KiB/s (1890kB/s)(1848KiB/1001msec) 00:17:07.238 slat (nsec): min=7907, max=59610, avg=26024.70, stdev=4164.61 00:17:07.238 clat (usec): min=905, max=1360, avg=1167.98, stdev=70.32 00:17:07.238 lat (usec): min=931, max=1385, avg=1194.00, stdev=70.47 00:17:07.238 clat percentiles (usec): 00:17:07.238 | 1.00th=[ 955], 5.00th=[ 1037], 10.00th=[ 1074], 20.00th=[ 1123], 00:17:07.238 | 30.00th=[ 1156], 40.00th=[ 1156], 50.00th=[ 1172], 60.00th=[ 1188], 00:17:07.238 | 70.00th=[ 1205], 80.00th=[ 1221], 90.00th=[ 1254], 95.00th=[ 1270], 00:17:07.238 | 99.00th=[ 1303], 99.50th=[ 1352], 99.90th=[ 1369], 99.95th=[ 1369], 00:17:07.238 | 99.99th=[ 1369] 00:17:07.238 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:17:07.238 slat (nsec): min=9911, max=68691, avg=31248.62, 
stdev=6911.43 00:17:07.238 clat (usec): min=326, max=1144, avg=827.73, stdev=107.91 00:17:07.238 lat (usec): min=358, max=1176, avg=858.97, stdev=109.80 00:17:07.238 clat percentiles (usec): 00:17:07.238 | 1.00th=[ 519], 5.00th=[ 635], 10.00th=[ 693], 20.00th=[ 750], 00:17:07.238 | 30.00th=[ 783], 40.00th=[ 816], 50.00th=[ 840], 60.00th=[ 865], 00:17:07.238 | 70.00th=[ 889], 80.00th=[ 914], 90.00th=[ 947], 95.00th=[ 979], 00:17:07.238 | 99.00th=[ 1045], 99.50th=[ 1057], 99.90th=[ 1139], 99.95th=[ 1139], 00:17:07.238 | 99.99th=[ 1139] 00:17:07.238 bw ( KiB/s): min= 4096, max= 4096, per=47.19%, avg=4096.00, stdev= 0.00, samples=1 00:17:07.238 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:07.238 lat (usec) : 500=0.31%, 750=10.57%, 1000=41.07% 00:17:07.238 lat (msec) : 2=48.05% 00:17:07.238 cpu : usr=1.40%, sys=3.00%, ctx=977, majf=0, minf=1 00:17:07.238 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:07.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.238 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.238 issued rwts: total=462,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:07.238 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:07.238 00:17:07.238 Run status group 0 (all jobs): 00:17:07.238 READ: bw=3876KiB/s (3969kB/s), 54.1KiB/s-2046KiB/s (55.4kB/s-2095kB/s), io=4016KiB (4112kB), run=1001-1036msec 00:17:07.238 WRITE: bw=8680KiB/s (8888kB/s), 1977KiB/s-2845KiB/s (2024kB/s-2913kB/s), io=8992KiB (9208kB), run=1001-1036msec 00:17:07.238 00:17:07.238 Disk stats (read/write): 00:17:07.238 nvme0n1: ios=59/512, merge=0/0, ticks=437/339, in_queue=776, util=87.27% 00:17:07.238 nvme0n2: ios=518/512, merge=0/0, ticks=546/299, in_queue=845, util=88.28% 00:17:07.238 nvme0n3: ios=33/512, merge=0/0, ticks=1354/307, in_queue=1661, util=92.19% 00:17:07.238 nvme0n4: ios=385/512, merge=0/0, ticks=1191/397, in_queue=1588, util=94.34% 00:17:07.238 16:08:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:17:07.238 [global] 00:17:07.238 thread=1 00:17:07.238 invalidate=1 00:17:07.238 rw=write 00:17:07.238 time_based=1 00:17:07.238 runtime=1 00:17:07.238 ioengine=libaio 00:17:07.238 direct=1 00:17:07.238 bs=4096 00:17:07.238 iodepth=128 00:17:07.238 norandommap=0 00:17:07.238 numjobs=1 00:17:07.238 00:17:07.238 verify_dump=1 00:17:07.238 verify_backlog=512 00:17:07.238 verify_state_save=0 00:17:07.238 do_verify=1 00:17:07.238 verify=crc32c-intel 00:17:07.238 [job0] 00:17:07.238 filename=/dev/nvme0n1 00:17:07.238 [job1] 00:17:07.238 filename=/dev/nvme0n2 00:17:07.238 [job2] 00:17:07.238 filename=/dev/nvme0n3 00:17:07.238 [job3] 00:17:07.238 filename=/dev/nvme0n4 00:17:07.238 Could not set queue depth (nvme0n1) 00:17:07.238 Could not set queue depth (nvme0n2) 00:17:07.238 Could not set queue depth (nvme0n3) 00:17:07.238 Could not set queue depth (nvme0n4) 00:17:07.498 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:07.498 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:07.498 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:07.498 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:07.498 fio-3.35 00:17:07.498 Starting 
4 threads 00:17:08.886 00:17:08.886 job0: (groupid=0, jobs=1): err= 0: pid=2270368: Mon Jul 15 16:08:44 2024 00:17:08.886 read: IOPS=5611, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec) 00:17:08.886 slat (nsec): min=896, max=18965k, avg=78102.21, stdev=611352.88 00:17:08.886 clat (usec): min=1681, max=32956, avg=10933.26, stdev=4640.20 00:17:08.886 lat (usec): min=1696, max=41385, avg=11011.36, stdev=4674.69 00:17:08.886 clat percentiles (usec): 00:17:08.886 | 1.00th=[ 2900], 5.00th=[ 5145], 10.00th=[ 5932], 20.00th=[ 6915], 00:17:08.887 | 30.00th=[ 7963], 40.00th=[ 8848], 50.00th=[10421], 60.00th=[11600], 00:17:08.887 | 70.00th=[12649], 80.00th=[14222], 90.00th=[16909], 95.00th=[20055], 00:17:08.887 | 99.00th=[25822], 99.50th=[25822], 99.90th=[32900], 99.95th=[32900], 00:17:08.887 | 99.99th=[32900] 00:17:08.887 write: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec); 0 zone resets 00:17:08.887 slat (nsec): min=1605, max=10457k, avg=75831.09, stdev=544234.33 00:17:08.887 clat (usec): min=653, max=30029, avg=10712.14, stdev=5098.23 00:17:08.887 lat (usec): min=656, max=30060, avg=10787.97, stdev=5131.58 00:17:08.887 clat percentiles (usec): 00:17:08.887 | 1.00th=[ 1729], 5.00th=[ 2802], 10.00th=[ 4293], 20.00th=[ 6325], 00:17:08.887 | 30.00th=[ 7767], 40.00th=[ 9372], 50.00th=[10552], 60.00th=[11600], 00:17:08.887 | 70.00th=[12387], 80.00th=[14353], 90.00th=[17695], 95.00th=[21627], 00:17:08.887 | 99.00th=[25035], 99.50th=[25822], 99.90th=[29230], 99.95th=[29230], 00:17:08.887 | 99.99th=[30016] 00:17:08.887 bw ( KiB/s): min=20480, max=27712, per=27.26%, avg=24096.00, stdev=5113.80, samples=2 00:17:08.887 iops : min= 5120, max= 6928, avg=6024.00, stdev=1278.45, samples=2 00:17:08.887 lat (usec) : 750=0.03%, 1000=0.01% 00:17:08.887 lat (msec) : 2=1.13%, 4=4.39%, 10=40.16%, 20=48.63%, 50=5.66% 00:17:08.887 cpu : usr=3.78%, sys=6.67%, ctx=472, majf=0, minf=1 00:17:08.887 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:17:08.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.887 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:08.887 issued rwts: total=5640,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.887 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:08.887 job1: (groupid=0, jobs=1): err= 0: pid=2270369: Mon Jul 15 16:08:44 2024 00:17:08.887 read: IOPS=6609, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1007msec) 00:17:08.887 slat (nsec): min=944, max=11773k, avg=68366.08, stdev=521933.52 00:17:08.887 clat (usec): min=2431, max=37743, avg=9129.20, stdev=3717.11 00:17:08.887 lat (usec): min=2446, max=37751, avg=9197.57, stdev=3763.09 00:17:08.887 clat percentiles (usec): 00:17:08.887 | 1.00th=[ 4490], 5.00th=[ 5669], 10.00th=[ 6194], 20.00th=[ 6652], 00:17:08.887 | 30.00th=[ 7046], 40.00th=[ 7504], 50.00th=[ 8225], 60.00th=[ 9110], 00:17:08.887 | 70.00th=[ 9765], 80.00th=[10945], 90.00th=[12256], 95.00th=[16581], 00:17:08.887 | 99.00th=[24249], 99.50th=[30016], 99.90th=[36963], 99.95th=[37487], 00:17:08.887 | 99.99th=[37487] 00:17:08.887 write: IOPS=6846, BW=26.7MiB/s (28.0MB/s)(26.9MiB/1007msec); 0 zone resets 00:17:08.887 slat (nsec): min=1637, max=7302.9k, avg=65336.34, stdev=394207.66 00:17:08.887 clat (usec): min=950, max=37712, avg=9730.62, stdev=5646.68 00:17:08.887 lat (usec): min=958, max=37717, avg=9795.96, stdev=5681.08 00:17:08.887 clat percentiles (usec): 00:17:08.887 | 1.00th=[ 2114], 5.00th=[ 3785], 10.00th=[ 4178], 20.00th=[ 5342], 00:17:08.887 | 30.00th=[ 6063], 40.00th=[ 6652], 
50.00th=[ 8225], 60.00th=[ 9241], 00:17:08.887 | 70.00th=[11076], 80.00th=[14222], 90.00th=[17957], 95.00th=[21365], 00:17:08.887 | 99.00th=[26608], 99.50th=[30016], 99.90th=[35390], 99.95th=[35390], 00:17:08.887 | 99.99th=[37487] 00:17:08.887 bw ( KiB/s): min=21536, max=32600, per=30.62%, avg=27068.00, stdev=7823.43, samples=2 00:17:08.887 iops : min= 5384, max= 8150, avg=6767.00, stdev=1955.86, samples=2 00:17:08.887 lat (usec) : 1000=0.03% 00:17:08.887 lat (msec) : 2=0.42%, 4=4.11%, 10=64.68%, 20=26.00%, 50=4.76% 00:17:08.887 cpu : usr=5.17%, sys=6.76%, ctx=521, majf=0, minf=1 00:17:08.887 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:17:08.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.887 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:08.887 issued rwts: total=6656,6894,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.887 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:08.887 job2: (groupid=0, jobs=1): err= 0: pid=2270373: Mon Jul 15 16:08:44 2024 00:17:08.887 read: IOPS=4832, BW=18.9MiB/s (19.8MB/s)(19.0MiB/1006msec) 00:17:08.887 slat (nsec): min=906, max=17403k, avg=111736.50, stdev=742703.77 00:17:08.887 clat (usec): min=2133, max=40315, avg=14459.25, stdev=6776.36 00:17:08.887 lat (usec): min=6263, max=40325, avg=14570.99, stdev=6805.28 00:17:08.887 clat percentiles (usec): 00:17:08.887 | 1.00th=[ 6521], 5.00th=[ 7832], 10.00th=[ 8455], 20.00th=[ 8979], 00:17:08.887 | 30.00th=[ 9503], 40.00th=[11338], 50.00th=[12387], 60.00th=[13435], 00:17:08.887 | 70.00th=[16712], 80.00th=[19006], 90.00th=[22938], 95.00th=[30540], 00:17:08.887 | 99.00th=[39584], 99.50th=[40109], 99.90th=[40109], 99.95th=[40109], 00:17:08.887 | 99.99th=[40109] 00:17:08.887 write: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec); 0 zone resets 00:17:08.887 slat (nsec): min=1545, max=7710.9k, avg=84514.42, stdev=489398.12 00:17:08.887 clat (usec): min=723, max=25263, avg=11096.36, stdev=4253.25 00:17:08.887 lat (usec): min=753, max=25289, avg=11180.88, stdev=4273.55 00:17:08.887 clat percentiles (usec): 00:17:08.887 | 1.00th=[ 3359], 5.00th=[ 5407], 10.00th=[ 5997], 20.00th=[ 7373], 00:17:08.887 | 30.00th=[ 8455], 40.00th=[ 9241], 50.00th=[10683], 60.00th=[11994], 00:17:08.887 | 70.00th=[12911], 80.00th=[14746], 90.00th=[17171], 95.00th=[19792], 00:17:08.887 | 99.00th=[22152], 99.50th=[22152], 99.90th=[22938], 99.95th=[22938], 00:17:08.887 | 99.99th=[25297] 00:17:08.887 bw ( KiB/s): min=16384, max=24576, per=23.17%, avg=20480.00, stdev=5792.62, samples=2 00:17:08.887 iops : min= 4096, max= 6144, avg=5120.00, stdev=1448.15, samples=2 00:17:08.887 lat (usec) : 750=0.03% 00:17:08.887 lat (msec) : 2=0.01%, 4=0.89%, 10=37.75%, 20=51.44%, 50=9.88% 00:17:08.887 cpu : usr=3.68%, sys=4.28%, ctx=469, majf=0, minf=1 00:17:08.887 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:17:08.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.887 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:08.887 issued rwts: total=4861,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.887 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:08.887 job3: (groupid=0, jobs=1): err= 0: pid=2270375: Mon Jul 15 16:08:44 2024 00:17:08.887 read: IOPS=3746, BW=14.6MiB/s (15.3MB/s)(14.7MiB/1007msec) 00:17:08.887 slat (nsec): min=934, max=15776k, avg=118243.57, stdev=766063.54 00:17:08.887 clat (usec): min=2356, max=42403, avg=14840.91, 
stdev=7401.71 00:17:08.887 lat (usec): min=6633, max=42431, avg=14959.15, stdev=7464.02 00:17:08.887 clat percentiles (usec): 00:17:08.887 | 1.00th=[ 7439], 5.00th=[ 8291], 10.00th=[ 8717], 20.00th=[ 9765], 00:17:08.887 | 30.00th=[10290], 40.00th=[11207], 50.00th=[11600], 60.00th=[12780], 00:17:08.887 | 70.00th=[15270], 80.00th=[18220], 90.00th=[27395], 95.00th=[32637], 00:17:08.887 | 99.00th=[38011], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:17:08.887 | 99.99th=[42206] 00:17:08.887 write: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec); 0 zone resets 00:17:08.887 slat (nsec): min=1638, max=10586k, avg=131137.94, stdev=736181.24 00:17:08.887 clat (usec): min=6347, max=67629, avg=17334.08, stdev=12625.67 00:17:08.887 lat (usec): min=6357, max=67637, avg=17465.21, stdev=12714.67 00:17:08.887 clat percentiles (usec): 00:17:08.887 | 1.00th=[ 6915], 5.00th=[ 7963], 10.00th=[ 8225], 20.00th=[ 9241], 00:17:08.887 | 30.00th=[10028], 40.00th=[11469], 50.00th=[13304], 60.00th=[14484], 00:17:08.887 | 70.00th=[16909], 80.00th=[21365], 90.00th=[31851], 95.00th=[53216], 00:17:08.887 | 99.00th=[62653], 99.50th=[64750], 99.90th=[67634], 99.95th=[67634], 00:17:08.887 | 99.99th=[67634] 00:17:08.887 bw ( KiB/s): min=15072, max=17696, per=18.53%, avg=16384.00, stdev=1855.45, samples=2 00:17:08.887 iops : min= 3768, max= 4424, avg=4096.00, stdev=463.86, samples=2 00:17:08.887 lat (msec) : 4=0.01%, 10=26.64%, 20=50.81%, 50=19.65%, 100=2.90% 00:17:08.887 cpu : usr=2.49%, sys=4.27%, ctx=396, majf=0, minf=1 00:17:08.887 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:17:08.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.887 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:08.887 issued rwts: total=3773,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.887 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:08.887 00:17:08.887 Run status group 0 (all jobs): 00:17:08.887 READ: bw=81.2MiB/s (85.1MB/s), 14.6MiB/s-25.8MiB/s (15.3MB/s-27.1MB/s), io=81.8MiB (85.7MB), run=1005-1007msec 00:17:08.887 WRITE: bw=86.3MiB/s (90.5MB/s), 15.9MiB/s-26.7MiB/s (16.7MB/s-28.0MB/s), io=86.9MiB (91.2MB), run=1005-1007msec 00:17:08.887 00:17:08.887 Disk stats (read/write): 00:17:08.887 nvme0n1: ios=4652/4743, merge=0/0, ticks=37212/31673, in_queue=68885, util=86.77% 00:17:08.887 nvme0n2: ios=5681/5687, merge=0/0, ticks=50757/51239, in_queue=101996, util=88.49% 00:17:08.887 nvme0n3: ios=4154/4286, merge=0/0, ticks=31954/23553, in_queue=55507, util=95.05% 00:17:08.887 nvme0n4: ios=3285/3584, merge=0/0, ticks=26152/25762, in_queue=51914, util=94.24% 00:17:08.887 16:08:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:17:08.887 [global] 00:17:08.887 thread=1 00:17:08.887 invalidate=1 00:17:08.887 rw=randwrite 00:17:08.887 time_based=1 00:17:08.887 runtime=1 00:17:08.887 ioengine=libaio 00:17:08.887 direct=1 00:17:08.887 bs=4096 00:17:08.887 iodepth=128 00:17:08.887 norandommap=0 00:17:08.887 numjobs=1 00:17:08.887 00:17:08.887 verify_dump=1 00:17:08.887 verify_backlog=512 00:17:08.887 verify_state_save=0 00:17:08.887 do_verify=1 00:17:08.887 verify=crc32c-intel 00:17:08.887 [job0] 00:17:08.887 filename=/dev/nvme0n1 00:17:08.887 [job1] 00:17:08.887 filename=/dev/nvme0n2 00:17:08.887 [job2] 00:17:08.887 filename=/dev/nvme0n3 00:17:08.887 [job3] 00:17:08.887 filename=/dev/nvme0n4 00:17:08.887 Could not 
set queue depth (nvme0n1) 00:17:08.887 Could not set queue depth (nvme0n2) 00:17:08.887 Could not set queue depth (nvme0n3) 00:17:08.887 Could not set queue depth (nvme0n4) 00:17:09.146 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:09.146 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:09.146 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:09.146 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:09.146 fio-3.35 00:17:09.146 Starting 4 threads 00:17:10.152 00:17:10.152 job0: (groupid=0, jobs=1): err= 0: pid=2270894: Mon Jul 15 16:08:45 2024 00:17:10.152 read: IOPS=3969, BW=15.5MiB/s (16.3MB/s)(15.5MiB/1002msec) 00:17:10.152 slat (nsec): min=853, max=23287k, avg=133280.18, stdev=983764.73 00:17:10.152 clat (usec): min=714, max=76650, avg=16119.21, stdev=13676.40 00:17:10.152 lat (usec): min=2551, max=76656, avg=16252.49, stdev=13772.04 00:17:10.152 clat percentiles (usec): 00:17:10.152 | 1.00th=[ 3032], 5.00th=[ 6456], 10.00th=[ 8094], 20.00th=[ 8717], 00:17:10.152 | 30.00th=[ 8979], 40.00th=[ 9372], 50.00th=[ 9896], 60.00th=[13829], 00:17:10.152 | 70.00th=[16909], 80.00th=[19530], 90.00th=[30802], 95.00th=[48497], 00:17:10.152 | 99.00th=[76022], 99.50th=[76022], 99.90th=[77071], 99.95th=[77071], 00:17:10.152 | 99.99th=[77071] 00:17:10.152 write: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 00:17:10.152 slat (nsec): min=1517, max=13198k, avg=107320.39, stdev=750664.60 00:17:10.152 clat (usec): min=2371, max=67689, avg=15285.62, stdev=10171.20 00:17:10.152 lat (usec): min=2386, max=79304, avg=15392.94, stdev=10225.25 00:17:10.152 clat percentiles (usec): 00:17:10.152 | 1.00th=[ 4228], 5.00th=[ 7570], 10.00th=[ 8586], 20.00th=[ 8848], 00:17:10.152 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[10552], 60.00th=[11469], 00:17:10.152 | 70.00th=[14484], 80.00th=[24249], 90.00th=[29754], 95.00th=[34341], 00:17:10.152 | 99.00th=[59507], 99.50th=[62129], 99.90th=[67634], 99.95th=[67634], 00:17:10.152 | 99.99th=[67634] 00:17:10.152 bw ( KiB/s): min=11784, max=20984, per=20.10%, avg=16384.00, stdev=6505.38, samples=2 00:17:10.152 iops : min= 2946, max= 5246, avg=4096.00, stdev=1626.35, samples=2 00:17:10.152 lat (usec) : 750=0.01% 00:17:10.152 lat (msec) : 4=1.65%, 10=46.60%, 20=29.67%, 50=18.85%, 100=3.22% 00:17:10.152 cpu : usr=3.10%, sys=4.10%, ctx=339, majf=0, minf=1 00:17:10.152 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:17:10.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.152 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:10.152 issued rwts: total=3977,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:10.152 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:10.152 job1: (groupid=0, jobs=1): err= 0: pid=2270895: Mon Jul 15 16:08:45 2024 00:17:10.152 read: IOPS=4390, BW=17.1MiB/s (18.0MB/s)(17.2MiB/1005msec) 00:17:10.152 slat (nsec): min=910, max=16152k, avg=116904.54, stdev=897703.73 00:17:10.152 clat (usec): min=1084, max=57219, avg=14040.28, stdev=11899.46 00:17:10.152 lat (usec): min=4433, max=57224, avg=14157.19, stdev=11982.12 00:17:10.152 clat percentiles (usec): 00:17:10.152 | 1.00th=[ 5211], 5.00th=[ 6063], 10.00th=[ 6390], 20.00th=[ 7242], 00:17:10.152 | 30.00th=[ 8029], 40.00th=[ 8455], 
50.00th=[ 9110], 60.00th=[ 9503], 00:17:10.152 | 70.00th=[11207], 80.00th=[15795], 90.00th=[35914], 95.00th=[45351], 00:17:10.152 | 99.00th=[55837], 99.50th=[56361], 99.90th=[57410], 99.95th=[57410], 00:17:10.152 | 99.99th=[57410] 00:17:10.152 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:17:10.152 slat (nsec): min=1542, max=13529k, avg=100764.94, stdev=669917.81 00:17:10.152 clat (usec): min=3534, max=52170, avg=14108.91, stdev=10123.87 00:17:10.152 lat (usec): min=3544, max=52175, avg=14209.68, stdev=10169.29 00:17:10.152 clat percentiles (usec): 00:17:10.152 | 1.00th=[ 4555], 5.00th=[ 6063], 10.00th=[ 6849], 20.00th=[ 7635], 00:17:10.152 | 30.00th=[ 8225], 40.00th=[ 9110], 50.00th=[10159], 60.00th=[11731], 00:17:10.152 | 70.00th=[13435], 80.00th=[17695], 90.00th=[30802], 95.00th=[41157], 00:17:10.152 | 99.00th=[46924], 99.50th=[52167], 99.90th=[52167], 99.95th=[52167], 00:17:10.152 | 99.99th=[52167] 00:17:10.152 bw ( KiB/s): min=17392, max=19472, per=22.61%, avg=18432.00, stdev=1470.78, samples=2 00:17:10.152 iops : min= 4348, max= 4868, avg=4608.00, stdev=367.70, samples=2 00:17:10.153 lat (msec) : 2=0.01%, 4=0.06%, 10=55.47%, 20=26.06%, 50=16.86% 00:17:10.153 lat (msec) : 100=1.54% 00:17:10.153 cpu : usr=2.49%, sys=3.88%, ctx=488, majf=0, minf=1 00:17:10.153 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:17:10.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.153 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:10.153 issued rwts: total=4412,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:10.153 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:10.153 job2: (groupid=0, jobs=1): err= 0: pid=2270896: Mon Jul 15 16:08:45 2024 00:17:10.153 read: IOPS=5629, BW=22.0MiB/s (23.1MB/s)(22.1MiB/1004msec) 00:17:10.153 slat (nsec): min=908, max=28502k, avg=86945.82, stdev=671583.07 00:17:10.153 clat (usec): min=3278, max=43158, avg=11821.85, stdev=5619.87 00:17:10.153 lat (usec): min=3283, max=43163, avg=11908.79, stdev=5653.42 00:17:10.153 clat percentiles (usec): 00:17:10.153 | 1.00th=[ 4883], 5.00th=[ 6390], 10.00th=[ 7111], 20.00th=[ 8356], 00:17:10.153 | 30.00th=[ 9110], 40.00th=[10159], 50.00th=[10814], 60.00th=[11469], 00:17:10.153 | 70.00th=[12518], 80.00th=[13829], 90.00th=[17433], 95.00th=[19006], 00:17:10.153 | 99.00th=[40109], 99.50th=[41681], 99.90th=[43254], 99.95th=[43254], 00:17:10.153 | 99.99th=[43254] 00:17:10.153 write: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec); 0 zone resets 00:17:10.153 slat (nsec): min=1569, max=8632.1k, avg=75639.54, stdev=521605.78 00:17:10.153 clat (usec): min=2777, max=25620, avg=9849.55, stdev=3332.19 00:17:10.153 lat (usec): min=2786, max=25651, avg=9925.19, stdev=3350.05 00:17:10.153 clat percentiles (usec): 00:17:10.153 | 1.00th=[ 3916], 5.00th=[ 5342], 10.00th=[ 6259], 20.00th=[ 7111], 00:17:10.153 | 30.00th=[ 7701], 40.00th=[ 8356], 50.00th=[ 9634], 60.00th=[10290], 00:17:10.153 | 70.00th=[10945], 80.00th=[12518], 90.00th=[14746], 95.00th=[16909], 00:17:10.153 | 99.00th=[18744], 99.50th=[19006], 99.90th=[19268], 99.95th=[21103], 00:17:10.153 | 99.99th=[25560] 00:17:10.153 bw ( KiB/s): min=23720, max=24576, per=29.62%, avg=24148.00, stdev=605.28, samples=2 00:17:10.153 iops : min= 5930, max= 6144, avg=6037.00, stdev=151.32, samples=2 00:17:10.153 lat (msec) : 4=0.92%, 10=46.83%, 20=49.96%, 50=2.29% 00:17:10.153 cpu : usr=3.79%, sys=6.88%, ctx=371, majf=0, minf=1 00:17:10.153 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:17:10.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.153 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:10.153 issued rwts: total=5652,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:10.153 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:10.153 job3: (groupid=0, jobs=1): err= 0: pid=2270897: Mon Jul 15 16:08:45 2024 00:17:10.153 read: IOPS=5448, BW=21.3MiB/s (22.3MB/s)(21.4MiB/1005msec) 00:17:10.153 slat (nsec): min=923, max=10690k, avg=90281.29, stdev=601981.58 00:17:10.153 clat (usec): min=1165, max=28476, avg=12038.03, stdev=3814.18 00:17:10.153 lat (usec): min=4049, max=28483, avg=12128.31, stdev=3834.72 00:17:10.153 clat percentiles (usec): 00:17:10.153 | 1.00th=[ 6194], 5.00th=[ 7635], 10.00th=[ 8356], 20.00th=[ 9241], 00:17:10.153 | 30.00th=[ 9896], 40.00th=[10421], 50.00th=[11207], 60.00th=[11994], 00:17:10.153 | 70.00th=[12780], 80.00th=[13829], 90.00th=[16909], 95.00th=[20317], 00:17:10.153 | 99.00th=[26346], 99.50th=[27919], 99.90th=[28443], 99.95th=[28443], 00:17:10.153 | 99.99th=[28443] 00:17:10.153 write: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec); 0 zone resets 00:17:10.153 slat (nsec): min=1529, max=11740k, avg=82460.17, stdev=537357.27 00:17:10.153 clat (usec): min=1198, max=33651, avg=10927.04, stdev=4223.83 00:17:10.153 lat (usec): min=1207, max=33659, avg=11009.50, stdev=4243.61 00:17:10.153 clat percentiles (usec): 00:17:10.153 | 1.00th=[ 4359], 5.00th=[ 5997], 10.00th=[ 6390], 20.00th=[ 7767], 00:17:10.153 | 30.00th=[ 8455], 40.00th=[ 9372], 50.00th=[10028], 60.00th=[10814], 00:17:10.153 | 70.00th=[12125], 80.00th=[13435], 90.00th=[16712], 95.00th=[17957], 00:17:10.153 | 99.00th=[29492], 99.50th=[29492], 99.90th=[33817], 99.95th=[33817], 00:17:10.153 | 99.99th=[33817] 00:17:10.153 bw ( KiB/s): min=20488, max=24568, per=27.64%, avg=22528.00, stdev=2885.00, samples=2 00:17:10.153 iops : min= 5122, max= 6142, avg=5632.00, stdev=721.25, samples=2 00:17:10.153 lat (msec) : 2=0.10%, 4=0.26%, 10=40.96%, 20=54.96%, 50=3.72% 00:17:10.153 cpu : usr=3.98%, sys=5.98%, ctx=441, majf=0, minf=1 00:17:10.153 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:17:10.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.153 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:10.153 issued rwts: total=5476,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:10.153 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:10.153 00:17:10.153 Run status group 0 (all jobs): 00:17:10.153 READ: bw=75.9MiB/s (79.5MB/s), 15.5MiB/s-22.0MiB/s (16.3MB/s-23.1MB/s), io=76.2MiB (79.9MB), run=1002-1005msec 00:17:10.153 WRITE: bw=79.6MiB/s (83.5MB/s), 16.0MiB/s-23.9MiB/s (16.7MB/s-25.1MB/s), io=80.0MiB (83.9MB), run=1002-1005msec 00:17:10.153 00:17:10.153 Disk stats (read/write): 00:17:10.153 nvme0n1: ios=2738/3072, merge=0/0, ticks=23718/17642, in_queue=41360, util=86.87% 00:17:10.153 nvme0n2: ios=3621/3639, merge=0/0, ticks=15515/15392, in_queue=30907, util=97.96% 00:17:10.153 nvme0n3: ios=4920/5120, merge=0/0, ticks=43244/35440, in_queue=78684, util=98.84% 00:17:10.153 nvme0n4: ios=4681/5120, merge=0/0, ticks=44437/42766, in_queue=87203, util=91.79% 00:17:10.153 16:08:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:17:10.414 16:08:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2271231 00:17:10.414 16:08:46 nvmf_tcp.nvmf_fio_target -- 
target/fio.sh@61 -- # sleep 3 00:17:10.414 16:08:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:17:10.414 [global] 00:17:10.414 thread=1 00:17:10.414 invalidate=1 00:17:10.414 rw=read 00:17:10.414 time_based=1 00:17:10.414 runtime=10 00:17:10.414 ioengine=libaio 00:17:10.414 direct=1 00:17:10.414 bs=4096 00:17:10.414 iodepth=1 00:17:10.414 norandommap=1 00:17:10.414 numjobs=1 00:17:10.414 00:17:10.414 [job0] 00:17:10.414 filename=/dev/nvme0n1 00:17:10.414 [job1] 00:17:10.414 filename=/dev/nvme0n2 00:17:10.414 [job2] 00:17:10.414 filename=/dev/nvme0n3 00:17:10.414 [job3] 00:17:10.414 filename=/dev/nvme0n4 00:17:10.414 Could not set queue depth (nvme0n1) 00:17:10.414 Could not set queue depth (nvme0n2) 00:17:10.414 Could not set queue depth (nvme0n3) 00:17:10.414 Could not set queue depth (nvme0n4) 00:17:10.674 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:10.674 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:10.674 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:10.674 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:10.674 fio-3.35 00:17:10.674 Starting 4 threads 00:17:13.216 16:08:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:17:13.476 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=7802880, buflen=4096 00:17:13.476 fio: pid=2271421, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:13.476 16:08:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:17:13.736 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=8269824, buflen=4096 00:17:13.736 fio: pid=2271420, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:13.736 16:08:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:13.736 16:08:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:17:13.736 16:08:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:13.736 16:08:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:17:13.736 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=6729728, buflen=4096 00:17:13.736 fio: pid=2271418, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:13.997 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=303104, buflen=4096 00:17:13.997 fio: pid=2271419, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:17:13.997 16:08:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:13.997 16:08:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:17:13.997 00:17:13.997 job0: (groupid=0, jobs=1): err=121 
(file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2271418: Mon Jul 15 16:08:49 2024 00:17:13.997 read: IOPS=555, BW=2219KiB/s (2272kB/s)(6572KiB/2962msec) 00:17:13.997 slat (usec): min=2, max=13510, avg=36.66, stdev=437.92 00:17:13.997 clat (usec): min=320, max=43131, avg=1744.84, stdev=6314.07 00:17:13.997 lat (usec): min=330, max=55002, avg=1781.51, stdev=6380.99 00:17:13.997 clat percentiles (usec): 00:17:13.997 | 1.00th=[ 412], 5.00th=[ 515], 10.00th=[ 545], 20.00th=[ 578], 00:17:13.997 | 30.00th=[ 619], 40.00th=[ 685], 50.00th=[ 742], 60.00th=[ 807], 00:17:13.997 | 70.00th=[ 857], 80.00th=[ 922], 90.00th=[ 1004], 95.00th=[ 1336], 00:17:13.997 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:17:13.997 | 99.99th=[43254] 00:17:13.997 bw ( KiB/s): min= 312, max= 4520, per=35.46%, avg=2601.60, stdev=2100.78, samples=5 00:17:13.997 iops : min= 78, max= 1130, avg=650.40, stdev=525.20, samples=5 00:17:13.997 lat (usec) : 500=3.71%, 750=47.51%, 1000=38.69% 00:17:13.997 lat (msec) : 2=7.66%, 50=2.37% 00:17:13.997 cpu : usr=0.47%, sys=1.18%, ctx=1647, majf=0, minf=1 00:17:13.997 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:13.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:13.997 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:13.997 issued rwts: total=1644,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:13.997 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:13.997 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=2271419: Mon Jul 15 16:08:49 2024 00:17:13.997 read: IOPS=24, BW=96.2KiB/s (98.5kB/s)(296KiB/3076msec) 00:17:13.997 slat (usec): min=24, max=8525, avg=424.10, stdev=1716.38 00:17:13.997 clat (usec): min=1268, max=43130, avg=41119.68, stdev=6697.42 00:17:13.997 lat (usec): min=1293, max=51002, avg=41448.08, stdev=6899.51 00:17:13.997 clat percentiles (usec): 00:17:13.997 | 1.00th=[ 1270], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:17:13.997 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:17:13.997 | 70.00th=[42206], 80.00th=[42730], 90.00th=[43254], 95.00th=[43254], 00:17:13.997 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:17:13.997 | 99.99th=[43254] 00:17:13.997 bw ( KiB/s): min= 96, max= 104, per=1.32%, avg=97.60, stdev= 3.58, samples=5 00:17:13.997 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:17:13.997 lat (msec) : 2=2.67%, 50=96.00% 00:17:13.997 cpu : usr=0.00%, sys=0.33%, ctx=80, majf=0, minf=1 00:17:13.997 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:13.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:13.997 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:13.997 issued rwts: total=75,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:13.997 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:13.997 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2271420: Mon Jul 15 16:08:49 2024 00:17:13.997 read: IOPS=733, BW=2931KiB/s (3002kB/s)(8076KiB/2755msec) 00:17:13.997 slat (usec): min=8, max=9408, avg=32.55, stdev=256.66 00:17:13.997 clat (usec): min=821, max=1602, avg=1312.97, stdev=68.02 00:17:13.997 lat (usec): min=878, max=10667, avg=1345.52, stdev=264.54 00:17:13.997 clat percentiles (usec): 00:17:13.997 | 1.00th=[ 1106], 5.00th=[ 1205], 10.00th=[ 1237], 
20.00th=[ 1270], 00:17:13.997 | 30.00th=[ 1287], 40.00th=[ 1303], 50.00th=[ 1319], 60.00th=[ 1336], 00:17:13.997 | 70.00th=[ 1352], 80.00th=[ 1369], 90.00th=[ 1385], 95.00th=[ 1418], 00:17:13.997 | 99.00th=[ 1467], 99.50th=[ 1500], 99.90th=[ 1565], 99.95th=[ 1598], 00:17:13.997 | 99.99th=[ 1598] 00:17:13.997 bw ( KiB/s): min= 2952, max= 2992, per=40.52%, avg=2972.80, stdev=14.53, samples=5 00:17:13.997 iops : min= 738, max= 748, avg=743.20, stdev= 3.63, samples=5 00:17:13.997 lat (usec) : 1000=0.20% 00:17:13.997 lat (msec) : 2=99.75% 00:17:13.997 cpu : usr=0.84%, sys=2.11%, ctx=2025, majf=0, minf=1 00:17:13.997 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:13.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:13.997 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:13.997 issued rwts: total=2020,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:13.997 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:13.997 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2271421: Mon Jul 15 16:08:49 2024 00:17:13.997 read: IOPS=735, BW=2942KiB/s (3013kB/s)(7620KiB/2590msec) 00:17:13.997 slat (nsec): min=8680, max=61638, avg=24768.98, stdev=3410.60 00:17:13.997 clat (usec): min=895, max=1639, avg=1314.94, stdev=67.48 00:17:13.997 lat (usec): min=928, max=1663, avg=1339.70, stdev=67.40 00:17:13.997 clat percentiles (usec): 00:17:13.997 | 1.00th=[ 1139], 5.00th=[ 1205], 10.00th=[ 1237], 20.00th=[ 1270], 00:17:13.997 | 30.00th=[ 1287], 40.00th=[ 1303], 50.00th=[ 1319], 60.00th=[ 1336], 00:17:13.997 | 70.00th=[ 1352], 80.00th=[ 1369], 90.00th=[ 1385], 95.00th=[ 1418], 00:17:13.997 | 99.00th=[ 1500], 99.50th=[ 1532], 99.90th=[ 1598], 99.95th=[ 1647], 00:17:13.997 | 99.99th=[ 1647] 00:17:13.997 bw ( KiB/s): min= 2952, max= 2984, per=40.52%, avg=2972.80, stdev=13.39, samples=5 00:17:13.997 iops : min= 738, max= 746, avg=743.20, stdev= 3.35, samples=5 00:17:13.997 lat (usec) : 1000=0.10% 00:17:13.997 lat (msec) : 2=99.84% 00:17:13.997 cpu : usr=0.66%, sys=2.28%, ctx=1907, majf=0, minf=2 00:17:13.997 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:13.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:13.997 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:13.997 issued rwts: total=1906,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:13.997 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:13.997 00:17:13.997 Run status group 0 (all jobs): 00:17:13.997 READ: bw=7336KiB/s (7512kB/s), 96.2KiB/s-2942KiB/s (98.5kB/s-3013kB/s), io=22.0MiB (23.1MB), run=2590-3076msec 00:17:13.997 00:17:13.997 Disk stats (read/write): 00:17:13.997 nvme0n1: ios=1665/0, merge=0/0, ticks=2790/0, in_queue=2790, util=95.63% 00:17:13.997 nvme0n2: ios=96/0, merge=0/0, ticks=2988/0, in_queue=2988, util=99.70% 00:17:13.997 nvme0n3: ios=1921/0, merge=0/0, ticks=2485/0, in_queue=2485, util=96.03% 00:17:13.997 nvme0n4: ios=1906/0, merge=0/0, ticks=2470/0, in_queue=2470, util=96.09% 00:17:13.997 16:08:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:13.997 16:08:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:17:14.256 16:08:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:17:14.256 16:08:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:17:14.516 16:08:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:14.516 16:08:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:17:14.516 16:08:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:14.516 16:08:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:17:14.776 16:08:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:17:14.777 16:08:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 2271231 00:17:14.777 16:08:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:17:14.777 16:08:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:14.777 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:14.777 16:08:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:14.777 16:08:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:17:14.777 16:08:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:14.777 16:08:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:14.777 16:08:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:14.777 16:08:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:15.037 16:08:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:17:15.037 16:08:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:17:15.037 16:08:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:17:15.037 nvmf hotplug test: fio failed as expected 00:17:15.037 16:08:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:15.037 16:08:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:17:15.037 16:08:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:17:15.037 16:08:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:17:15.037 16:08:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:17:15.037 16:08:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:17:15.037 16:08:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:15.037 16:08:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:17:15.037 16:08:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:15.037 16:08:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:17:15.037 16:08:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:15.037 16:08:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:15.037 rmmod nvme_tcp 00:17:15.037 
rmmod nvme_fabrics 00:17:15.037 rmmod nvme_keyring 00:17:15.037 16:08:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:15.037 16:08:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:17:15.037 16:08:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:17:15.037 16:08:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 2267714 ']' 00:17:15.037 16:08:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2267714 00:17:15.037 16:08:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 2267714 ']' 00:17:15.037 16:08:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 2267714 00:17:15.037 16:08:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:17:15.037 16:08:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:15.037 16:08:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2267714 00:17:15.297 16:08:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:15.297 16:08:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:15.297 16:08:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2267714' 00:17:15.297 killing process with pid 2267714 00:17:15.297 16:08:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 2267714 00:17:15.297 16:08:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 2267714 00:17:15.297 16:08:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:15.297 16:08:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:15.297 16:08:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:15.297 16:08:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:15.297 16:08:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:15.297 16:08:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:15.297 16:08:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:15.297 16:08:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:17.840 16:08:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:17.840 00:17:17.840 real 0m28.293s 00:17:17.841 user 2m32.304s 00:17:17.841 sys 0m9.176s 00:17:17.841 16:08:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:17.841 16:08:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.841 ************************************ 00:17:17.841 END TEST nvmf_fio_target 00:17:17.841 ************************************ 00:17:17.841 16:08:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:17.841 16:08:53 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:17.841 16:08:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:17.841 16:08:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:17.841 16:08:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:17.841 ************************************ 00:17:17.841 START TEST nvmf_bdevio 00:17:17.841 
************************************ 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:17.841 * Looking for test storage... 00:17:17.841 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:17:17.841 16:08:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:24.442 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:24.442 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:17:24.442 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:24.442 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:24.442 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:24.442 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:24.442 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:24.442 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:17:24.442 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:24.442 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:17:24.442 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:17:24.442 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:17:24.442 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:17:24.442 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:17:24.442 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:17:24.442 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:24.443 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:24.443 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:24.443 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:24.443 
Found net devices under 0000:4b:00.1: cvl_0_1 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:24.443 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:24.704 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:24.704 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:24.704 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:24.704 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:24.704 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:24.704 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:24.704 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:24.704 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:24.704 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.517 ms 00:17:24.704 00:17:24.704 --- 10.0.0.2 ping statistics --- 00:17:24.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.704 rtt min/avg/max/mdev = 0.517/0.517/0.517/0.000 ms 00:17:24.704 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:24.704 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:24.704 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.382 ms 00:17:24.704 00:17:24.704 --- 10.0.0.1 ping statistics --- 00:17:24.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.704 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:17:24.704 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:24.704 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:17:24.704 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:24.704 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:24.704 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:24.704 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:24.704 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:24.704 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:24.704 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:24.966 16:09:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:24.966 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:24.966 16:09:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:24.966 16:09:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:24.966 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2276468 00:17:24.966 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2276468 00:17:24.966 16:09:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:17:24.966 16:09:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 2276468 ']' 00:17:24.966 16:09:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.966 16:09:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:24.966 16:09:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:24.966 16:09:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:24.966 16:09:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:24.966 [2024-07-15 16:09:00.612526] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:17:24.966 [2024-07-15 16:09:00.612574] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:24.966 EAL: No free 2048 kB hugepages reported on node 1 00:17:24.966 [2024-07-15 16:09:00.695057] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:24.966 [2024-07-15 16:09:00.767091] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:24.966 [2024-07-15 16:09:00.767141] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
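An aside for readers following the trace: the nvmf_tcp_init step above is what builds the test bed. One E810 port (cvl_0_0) is moved into a private network namespace and becomes the target side, the other port (cvl_0_1) stays in the root namespace as the initiator, and an iptables rule opens TCP/4420 for NVMe/TCP. Condensed into plain shell, using the interface names and 10.0.0.0/24 addressing specific to this run, it is roughly:

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1        # start from clean interfaces
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                   # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                          # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1            # target -> initiator sanity check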
00:17:24.966 [2024-07-15 16:09:00.767149] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:24.966 [2024-07-15 16:09:00.767155] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:24.966 [2024-07-15 16:09:00.767161] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:24.966 [2024-07-15 16:09:00.767350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:24.966 [2024-07-15 16:09:00.767582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:24.966 [2024-07-15 16:09:00.767736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:24.966 [2024-07-15 16:09:00.767737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:25.910 16:09:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:25.910 16:09:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:17:25.910 16:09:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:25.910 16:09:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:25.910 16:09:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:25.910 16:09:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:25.910 16:09:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:25.910 16:09:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.910 16:09:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:25.910 [2024-07-15 16:09:01.454467] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:25.910 16:09:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.910 16:09:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:25.910 16:09:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.910 16:09:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:25.910 Malloc0 00:17:25.910 16:09:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.910 16:09:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:25.910 16:09:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.910 16:09:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:25.910 16:09:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.910 16:09:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:25.910 16:09:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.910 16:09:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:25.910 16:09:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.910 16:09:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:25.910 16:09:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.910 16:09:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
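The rpc_cmd calls just traced stand up the whole target: a TCP transport, a 64 MiB malloc bdev, a subsystem that allows any host, its namespace, and the listener. rpc_cmd is the test wrapper around scripts/rpc.py; assuming it talks to the target's default /var/tmp/spdk.sock, the same sequence issued by hand would be:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192        # transport options exactly as the test passes them
  $rpc bdev_malloc_create 64 512 -b Malloc0           # 64 MiB, 512-byte blocks (the Nvme1n1 seen below)
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420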
00:17:25.910 [2024-07-15 16:09:01.522841] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:25.910 16:09:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.910 16:09:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:17:25.910 16:09:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:25.910 16:09:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:17:25.910 16:09:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:17:25.910 16:09:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:25.910 16:09:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:25.910 { 00:17:25.910 "params": { 00:17:25.910 "name": "Nvme$subsystem", 00:17:25.910 "trtype": "$TEST_TRANSPORT", 00:17:25.910 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:25.910 "adrfam": "ipv4", 00:17:25.910 "trsvcid": "$NVMF_PORT", 00:17:25.910 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:25.910 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:25.910 "hdgst": ${hdgst:-false}, 00:17:25.910 "ddgst": ${ddgst:-false} 00:17:25.910 }, 00:17:25.910 "method": "bdev_nvme_attach_controller" 00:17:25.910 } 00:17:25.910 EOF 00:17:25.910 )") 00:17:25.910 16:09:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:17:25.910 16:09:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:17:25.910 16:09:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:17:25.910 16:09:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:25.910 "params": { 00:17:25.910 "name": "Nvme1", 00:17:25.910 "trtype": "tcp", 00:17:25.910 "traddr": "10.0.0.2", 00:17:25.910 "adrfam": "ipv4", 00:17:25.910 "trsvcid": "4420", 00:17:25.910 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:25.910 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:25.910 "hdgst": false, 00:17:25.910 "ddgst": false 00:17:25.910 }, 00:17:25.910 "method": "bdev_nvme_attach_controller" 00:17:25.910 }' 00:17:25.910 [2024-07-15 16:09:01.579161] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
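The JSON printed above is nothing more than a bdev_nvme_attach_controller call in --json form, telling bdevio to connect to the listener that was just created and expose it as bdev Nvme1. Driven interactively it would look roughly like the sketch below; the parameter values come from the trace, while the short option spellings are an assumption about rpc.py rather than something shown in this excerpt.

  scripts/rpc.py bdev_nvme_attach_controller \
      -b Nvme1 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  # header and data digests (hdgst/ddgst) stay at their default of false,
  # matching the generated JSON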
00:17:25.910 [2024-07-15 16:09:01.579226] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2276870 ] 00:17:25.910 EAL: No free 2048 kB hugepages reported on node 1 00:17:25.910 [2024-07-15 16:09:01.644116] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:25.910 [2024-07-15 16:09:01.719639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:25.910 [2024-07-15 16:09:01.719758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:25.910 [2024-07-15 16:09:01.719761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.170 I/O targets: 00:17:26.170 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:26.170 00:17:26.170 00:17:26.170 CUnit - A unit testing framework for C - Version 2.1-3 00:17:26.170 http://cunit.sourceforge.net/ 00:17:26.171 00:17:26.171 00:17:26.171 Suite: bdevio tests on: Nvme1n1 00:17:26.171 Test: blockdev write read block ...passed 00:17:26.171 Test: blockdev write zeroes read block ...passed 00:17:26.171 Test: blockdev write zeroes read no split ...passed 00:17:26.171 Test: blockdev write zeroes read split ...passed 00:17:26.171 Test: blockdev write zeroes read split partial ...passed 00:17:26.171 Test: blockdev reset ...[2024-07-15 16:09:02.002917] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:26.171 [2024-07-15 16:09:02.002969] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf0ce0 (9): Bad file descriptor 00:17:26.431 [2024-07-15 16:09:02.152913] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:26.431 passed 00:17:26.431 Test: blockdev write read 8 blocks ...passed 00:17:26.431 Test: blockdev write read size > 128k ...passed 00:17:26.431 Test: blockdev write read invalid size ...passed 00:17:26.431 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:26.431 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:26.431 Test: blockdev write read max offset ...passed 00:17:26.692 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:26.692 Test: blockdev writev readv 8 blocks ...passed 00:17:26.692 Test: blockdev writev readv 30 x 1block ...passed 00:17:26.692 Test: blockdev writev readv block ...passed 00:17:26.692 Test: blockdev writev readv size > 128k ...passed 00:17:26.692 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:26.692 Test: blockdev comparev and writev ...[2024-07-15 16:09:02.378822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:26.692 [2024-07-15 16:09:02.378847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:26.692 [2024-07-15 16:09:02.378858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:26.692 [2024-07-15 16:09:02.378863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:26.692 [2024-07-15 16:09:02.379319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:26.692 [2024-07-15 16:09:02.379329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:26.692 [2024-07-15 16:09:02.379338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:26.692 [2024-07-15 16:09:02.379343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:26.692 [2024-07-15 16:09:02.379787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:26.692 [2024-07-15 16:09:02.379795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:26.692 [2024-07-15 16:09:02.379804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:26.692 [2024-07-15 16:09:02.379809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:26.692 [2024-07-15 16:09:02.380262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:26.692 [2024-07-15 16:09:02.380270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:26.692 [2024-07-15 16:09:02.380280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:26.692 [2024-07-15 16:09:02.380285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:26.692 passed 00:17:26.692 Test: blockdev nvme passthru rw ...passed 00:17:26.692 Test: blockdev nvme passthru vendor specific ...[2024-07-15 16:09:02.464848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:26.692 [2024-07-15 16:09:02.464860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:26.692 [2024-07-15 16:09:02.465160] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:26.692 [2024-07-15 16:09:02.465168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:26.692 [2024-07-15 16:09:02.465502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:26.692 [2024-07-15 16:09:02.465509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:26.692 [2024-07-15 16:09:02.465816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:26.692 [2024-07-15 16:09:02.465827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:26.692 passed 00:17:26.692 Test: blockdev nvme admin passthru ...passed 00:17:26.692 Test: blockdev copy ...passed 00:17:26.692 00:17:26.692 Run Summary: Type Total Ran Passed Failed Inactive 00:17:26.692 suites 1 1 n/a 0 0 00:17:26.692 tests 23 23 23 0 0 00:17:26.692 asserts 152 152 152 0 n/a 00:17:26.692 00:17:26.692 Elapsed time = 1.332 seconds 00:17:26.953 16:09:02 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:26.953 16:09:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.953 16:09:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:26.953 16:09:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.953 16:09:02 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:26.953 16:09:02 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:17:26.953 16:09:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:26.953 16:09:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:17:26.953 16:09:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:26.953 16:09:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:17:26.953 16:09:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:26.953 16:09:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:26.953 rmmod nvme_tcp 00:17:26.953 rmmod nvme_fabrics 00:17:26.953 rmmod nvme_keyring 00:17:26.953 16:09:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:26.953 16:09:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:17:26.953 16:09:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:17:26.953 16:09:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2276468 ']' 00:17:26.953 16:09:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2276468 00:17:26.953 16:09:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
2276468 ']' 00:17:26.953 16:09:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 2276468 00:17:26.953 16:09:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:17:26.954 16:09:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:26.954 16:09:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2276468 00:17:27.215 16:09:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:17:27.215 16:09:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:17:27.215 16:09:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2276468' 00:17:27.215 killing process with pid 2276468 00:17:27.215 16:09:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 2276468 00:17:27.215 16:09:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 2276468 00:17:27.215 16:09:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:27.215 16:09:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:27.215 16:09:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:27.215 16:09:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:27.215 16:09:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:27.215 16:09:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.215 16:09:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:27.215 16:09:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:29.763 16:09:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:29.763 00:17:29.763 real 0m11.845s 00:17:29.763 user 0m12.908s 00:17:29.763 sys 0m5.872s 00:17:29.763 16:09:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:29.763 16:09:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:29.763 ************************************ 00:17:29.763 END TEST nvmf_bdevio 00:17:29.763 ************************************ 00:17:29.763 16:09:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:29.763 16:09:05 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:29.763 16:09:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:29.763 16:09:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:29.763 16:09:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:29.763 ************************************ 00:17:29.763 START TEST nvmf_auth_target 00:17:29.763 ************************************ 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:29.763 * Looking for test storage... 
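For completeness, the nvmftestfini teardown traced at the end of the bdevio test amounts to roughly the following. The body of _remove_spdk_ns is not shown in this excerpt, so treating it as a namespace delete is an assumption; everything else mirrors the commands in the trace.

  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill "$nvmfpid" && wait "$nvmfpid"            # stop the nvmf_tgt started for this test
  modprobe -r nvme-tcp nvme-fabrics             # unload the initiator-side kernel modules
  ip netns delete cvl_0_0_ns_spdk               # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush cvl_0_1                      # drop the initiator-side address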
00:17:29.763 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:29.763 16:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.380 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:36.380 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:36.380 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:36.380 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:36.380 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:36.380 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:36.380 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:36.380 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:36.380 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:36.380 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:17:36.380 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:36.380 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:17:36.380 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:36.380 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:17:36.380 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:36.380 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:36.380 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:36.380 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:36.380 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:36.380 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:36.381 16:09:11 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:36.381 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:36.381 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: 
cvl_0_0' 00:17:36.381 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:36.381 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:36.381 16:09:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:36.381 16:09:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:36.381 16:09:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:36.381 16:09:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:36.381 16:09:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:36.381 16:09:12 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:36.381 16:09:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:36.642 16:09:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:36.642 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:36.642 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.474 ms 00:17:36.642 00:17:36.642 --- 10.0.0.2 ping statistics --- 00:17:36.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.642 rtt min/avg/max/mdev = 0.474/0.474/0.474/0.000 ms 00:17:36.642 16:09:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:36.642 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:36.642 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:17:36.642 00:17:36.642 --- 10.0.0.1 ping statistics --- 00:17:36.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.642 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:17:36.642 16:09:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:36.642 16:09:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:17:36.642 16:09:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:36.642 16:09:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:36.642 16:09:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:36.642 16:09:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:36.642 16:09:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:36.642 16:09:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:36.642 16:09:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:36.642 16:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:17:36.642 16:09:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:36.642 16:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:36.642 16:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.642 16:09:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2281698 00:17:36.642 16:09:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2281698 00:17:36.642 16:09:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:36.642 16:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2281698 ']' 00:17:36.642 16:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.642 16:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:36.642 16:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
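Worth noting at this point: the auth test drives two SPDK processes. nvmfappstart above launches the NVMe-oF target inside the namespace with target-side DH-HMAC-CHAP logging (-L nvmf_auth), and a few lines further down a second app is started in the root namespace on its own RPC socket to play the host role. Stripped of the absolute build paths used in this run, the pair looks like:

  # target side, runs in the test namespace, RPC on the default /var/tmp/spdk.sock
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
  # host side, ordinary process, separate RPC socket so both can be scripted at once
  ./build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth &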
00:17:36.642 16:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:36.642 16:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.584 16:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:37.584 16:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:37.584 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:37.584 16:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:37.584 16:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.584 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=2281746 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=8914f3b4eaedc25a2209363c6a811d36bf76addd535e95be 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.wNp 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 8914f3b4eaedc25a2209363c6a811d36bf76addd535e95be 0 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 8914f3b4eaedc25a2209363c6a811d36bf76addd535e95be 0 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=8914f3b4eaedc25a2209363c6a811d36bf76addd535e95be 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.wNp 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.wNp 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.wNp 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=98c3d0860e56eb518163627ae1b9c861d7007892121c05445f821a2188d2b308 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.FTR 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 98c3d0860e56eb518163627ae1b9c861d7007892121c05445f821a2188d2b308 3 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 98c3d0860e56eb518163627ae1b9c861d7007892121c05445f821a2188d2b308 3 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=98c3d0860e56eb518163627ae1b9c861d7007892121c05445f821a2188d2b308 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.FTR 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.FTR 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.FTR 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7970c4dacade52211fec108d790eb0ac 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.x0p 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7970c4dacade52211fec108d790eb0ac 1 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7970c4dacade52211fec108d790eb0ac 1 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=7970c4dacade52211fec108d790eb0ac 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.x0p 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.x0p 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.x0p 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=325f2f5bbe500450e64996bbf5bdfb6ff25e2b7f928a3fc7 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Uy8 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 325f2f5bbe500450e64996bbf5bdfb6ff25e2b7f928a3fc7 2 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 325f2f5bbe500450e64996bbf5bdfb6ff25e2b7f928a3fc7 2 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=325f2f5bbe500450e64996bbf5bdfb6ff25e2b7f928a3fc7 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Uy8 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Uy8 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.Uy8 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:37.585 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:37.847 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f6d1d74b6591c49e3a92b7ce3a043d35674d9eccb097b94c 00:17:37.847 
16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:37.847 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.TrO 00:17:37.847 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f6d1d74b6591c49e3a92b7ce3a043d35674d9eccb097b94c 2 00:17:37.847 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f6d1d74b6591c49e3a92b7ce3a043d35674d9eccb097b94c 2 00:17:37.847 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:37.847 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:37.847 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f6d1d74b6591c49e3a92b7ce3a043d35674d9eccb097b94c 00:17:37.847 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:37.847 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:37.847 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.TrO 00:17:37.847 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.TrO 00:17:37.847 16:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.TrO 00:17:37.847 16:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:17:37.847 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:37.847 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:37.847 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:37.847 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:37.847 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:37.847 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:37.847 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=304776a24584f72dada6eccd2a917b98 00:17:37.847 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:37.847 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.K82 00:17:37.848 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 304776a24584f72dada6eccd2a917b98 1 00:17:37.848 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 304776a24584f72dada6eccd2a917b98 1 00:17:37.848 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:37.848 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:37.848 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=304776a24584f72dada6eccd2a917b98 00:17:37.848 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:37.848 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:37.848 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.K82 00:17:37.848 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.K82 00:17:37.848 16:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.K82 00:17:37.848 16:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:17:37.848 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:17:37.848 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:37.848 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:37.848 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:37.848 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:37.848 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:37.848 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=545cf40793771835f2e6f947ffc00eefb59e0c2ce5af3f5abd4b4e0513b873d1 00:17:37.848 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:37.848 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.k6x 00:17:37.848 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 545cf40793771835f2e6f947ffc00eefb59e0c2ce5af3f5abd4b4e0513b873d1 3 00:17:37.848 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 545cf40793771835f2e6f947ffc00eefb59e0c2ce5af3f5abd4b4e0513b873d1 3 00:17:37.848 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:37.848 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:37.848 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=545cf40793771835f2e6f947ffc00eefb59e0c2ce5af3f5abd4b4e0513b873d1 00:17:37.848 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:37.848 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:37.848 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.k6x 00:17:37.848 16:09:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.k6x 00:17:37.848 16:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.k6x 00:17:37.848 16:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:17:37.848 16:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 2281698 00:17:37.848 16:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2281698 ']' 00:17:37.848 16:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.848 16:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:37.848 16:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
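(Editorial sketch of the key-generation pattern traced above, for readers skimming the log: each gen_dhchap_key <digest> <len> call draws len/2 random bytes with xxd, writes a DHHC-1 secret into a mktemp file, and locks it down with chmod 0600. The helper below is a hypothetical stand-in, not the nvmf/common.sh implementation; in particular the secret encoding is simplified, since the inline "python -" step traced above appears to base64-wrap the hex key plus a checksum into the DHHC-1:<id>:<blob>: form seen in the later nvme connect --dhchap-secret values.)

# Hypothetical stand-in for the gen_dhchap_key pattern traced above; digest-id map,
# xxd length, mktemp template and chmod mirror the trace, payload encoding simplified.
gen_key() {
    local digest=$1 len=$2
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)      # len hex characters of randomness
    file=$(mktemp -t "spdk.key-$digest.XXX")
    # Simplified payload: the real helper base64-encodes the hex key plus a checksum
    # before wrapping it as DHHC-1:<id>:<base64 blob>:
    printf 'DHHC-1:%02d:%s:\n' "${digests[$digest]}" "$key" > "$file"
    chmod 0600 "$file"                                   # secret files must not be world-readable
    echo "$file"
}
# e.g. keys[1]=$(gen_key sha256 32); ckeys[1]=$(gen_key sha384 48), matching the
# /tmp/spdk.key-sha256.* and /tmp/spdk.key-sha384.* files created in the trace above.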
00:17:37.848 16:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:37.848 16:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.108 16:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:38.108 16:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:38.108 16:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 2281746 /var/tmp/host.sock 00:17:38.108 16:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2281746 ']' 00:17:38.108 16:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:17:38.108 16:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:38.108 16:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:38.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:38.108 16:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:38.108 16:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.108 16:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:38.108 16:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:38.108 16:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:17:38.108 16:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.108 16:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.108 16:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.108 16:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:38.108 16:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.wNp 00:17:38.108 16:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.108 16:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.369 16:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.369 16:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.wNp 00:17:38.369 16:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.wNp 00:17:38.369 16:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.FTR ]] 00:17:38.369 16:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.FTR 00:17:38.369 16:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.369 16:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.369 16:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.369 16:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.FTR 00:17:38.369 16:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.FTR 00:17:38.629 16:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:38.629 16:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.x0p 00:17:38.629 16:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.629 16:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.629 16:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.629 16:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.x0p 00:17:38.629 16:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.x0p 00:17:38.629 16:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.Uy8 ]] 00:17:38.629 16:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Uy8 00:17:38.629 16:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.629 16:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.629 16:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.629 16:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Uy8 00:17:38.629 16:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Uy8 00:17:38.889 16:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:38.889 16:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.TrO 00:17:38.889 16:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.889 16:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.889 16:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.889 16:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.TrO 00:17:38.889 16:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.TrO 00:17:39.149 16:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.K82 ]] 00:17:39.149 16:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.K82 00:17:39.149 16:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.149 16:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.149 16:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.149 16:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.K82 00:17:39.149 16:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.K82 00:17:39.149 16:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:39.149 16:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.k6x 00:17:39.149 16:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.149 16:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.149 16:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.149 16:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.k6x 00:17:39.149 16:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.k6x 00:17:39.410 16:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:17:39.410 16:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:39.410 16:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:39.410 16:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:39.410 16:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:39.410 16:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:39.410 16:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:17:39.410 16:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:39.410 16:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:39.410 16:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:39.410 16:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:39.410 16:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.410 16:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.410 16:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.410 16:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.410 16:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.410 16:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.410 16:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.670 00:17:39.670 16:09:15 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:39.670 16:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:39.670 16:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.931 16:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.931 16:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.931 16:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.931 16:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.931 16:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.931 16:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:39.931 { 00:17:39.931 "cntlid": 1, 00:17:39.931 "qid": 0, 00:17:39.931 "state": "enabled", 00:17:39.931 "thread": "nvmf_tgt_poll_group_000", 00:17:39.931 "listen_address": { 00:17:39.931 "trtype": "TCP", 00:17:39.931 "adrfam": "IPv4", 00:17:39.931 "traddr": "10.0.0.2", 00:17:39.931 "trsvcid": "4420" 00:17:39.931 }, 00:17:39.931 "peer_address": { 00:17:39.931 "trtype": "TCP", 00:17:39.931 "adrfam": "IPv4", 00:17:39.931 "traddr": "10.0.0.1", 00:17:39.931 "trsvcid": "48478" 00:17:39.931 }, 00:17:39.931 "auth": { 00:17:39.931 "state": "completed", 00:17:39.931 "digest": "sha256", 00:17:39.931 "dhgroup": "null" 00:17:39.931 } 00:17:39.931 } 00:17:39.931 ]' 00:17:39.931 16:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:39.931 16:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:39.931 16:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:39.931 16:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:39.931 16:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:39.931 16:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.931 16:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.931 16:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.191 16:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ODkxNGYzYjRlYWVkYzI1YTIyMDkzNjNjNmE4MTFkMzZiZjc2YWRkZDUzNWU5NWJlAOylMg==: --dhchap-ctrl-secret DHHC-1:03:OThjM2QwODYwZTU2ZWI1MTgxNjM2MjdhZTFiOWM4NjFkNzAwNzg5MjEyMWMwNTQ0NWY4MjFhMjE4OGQyYjMwOPhTa7Y=: 00:17:41.131 16:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.131 16:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:41.131 16:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.131 16:09:16 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.131 16:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.131 16:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:41.131 16:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:41.131 16:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:41.131 16:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:17:41.131 16:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:41.131 16:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:41.131 16:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:41.131 16:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:41.131 16:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.131 16:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.131 16:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.131 16:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.131 16:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.131 16:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.131 16:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.390 00:17:41.390 16:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:41.390 16:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:41.390 16:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.650 16:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.650 16:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.650 16:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.650 16:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.650 16:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.650 16:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:41.650 { 00:17:41.650 "cntlid": 3, 00:17:41.650 "qid": 0, 00:17:41.650 
"state": "enabled", 00:17:41.650 "thread": "nvmf_tgt_poll_group_000", 00:17:41.650 "listen_address": { 00:17:41.650 "trtype": "TCP", 00:17:41.650 "adrfam": "IPv4", 00:17:41.650 "traddr": "10.0.0.2", 00:17:41.650 "trsvcid": "4420" 00:17:41.650 }, 00:17:41.650 "peer_address": { 00:17:41.650 "trtype": "TCP", 00:17:41.650 "adrfam": "IPv4", 00:17:41.650 "traddr": "10.0.0.1", 00:17:41.650 "trsvcid": "48506" 00:17:41.650 }, 00:17:41.650 "auth": { 00:17:41.650 "state": "completed", 00:17:41.650 "digest": "sha256", 00:17:41.650 "dhgroup": "null" 00:17:41.650 } 00:17:41.650 } 00:17:41.650 ]' 00:17:41.650 16:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:41.650 16:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:41.650 16:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:41.650 16:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:41.650 16:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:41.650 16:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.650 16:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.650 16:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.910 16:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:Nzk3MGM0ZGFjYWRlNTIyMTFmZWMxMDhkNzkwZWIwYWOuEthg: --dhchap-ctrl-secret DHHC-1:02:MzI1ZjJmNWJiZTUwMDQ1MGU2NDk5NmJiZjViZGZiNmZmMjVlMmI3ZjkyOGEzZmM3xvx5zA==: 00:17:42.847 16:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.847 16:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:42.847 16:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.847 16:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.847 16:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.847 16:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:42.847 16:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:42.847 16:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:42.847 16:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:17:42.847 16:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:42.847 16:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:42.847 16:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:42.847 16:09:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:42.847 16:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.847 16:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.847 16:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.847 16:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.847 16:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.847 16:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.847 16:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.106 00:17:43.106 16:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:43.106 16:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:43.106 16:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.106 16:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.106 16:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.106 16:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.106 16:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.106 16:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.106 16:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:43.106 { 00:17:43.106 "cntlid": 5, 00:17:43.106 "qid": 0, 00:17:43.106 "state": "enabled", 00:17:43.106 "thread": "nvmf_tgt_poll_group_000", 00:17:43.106 "listen_address": { 00:17:43.106 "trtype": "TCP", 00:17:43.106 "adrfam": "IPv4", 00:17:43.106 "traddr": "10.0.0.2", 00:17:43.106 "trsvcid": "4420" 00:17:43.106 }, 00:17:43.106 "peer_address": { 00:17:43.106 "trtype": "TCP", 00:17:43.106 "adrfam": "IPv4", 00:17:43.106 "traddr": "10.0.0.1", 00:17:43.106 "trsvcid": "49318" 00:17:43.106 }, 00:17:43.106 "auth": { 00:17:43.106 "state": "completed", 00:17:43.106 "digest": "sha256", 00:17:43.106 "dhgroup": "null" 00:17:43.106 } 00:17:43.106 } 00:17:43.106 ]' 00:17:43.106 16:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:43.366 16:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:43.366 16:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:43.366 16:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:43.366 16:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:17:43.366 16:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.366 16:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.366 16:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.625 16:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZjZkMWQ3NGI2NTkxYzQ5ZTNhOTJiN2NlM2EwNDNkMzU2NzRkOWVjY2IwOTdiOTRjz3qt5A==: --dhchap-ctrl-secret DHHC-1:01:MzA0Nzc2YTI0NTg0ZjcyZGFkYTZlY2NkMmE5MTdiOTi8reIF: 00:17:44.194 16:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.194 16:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:44.194 16:09:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.194 16:09:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.194 16:09:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.194 16:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:44.194 16:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:44.194 16:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:44.454 16:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:17:44.454 16:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:44.454 16:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:44.454 16:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:44.454 16:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:44.454 16:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.454 16:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:44.454 16:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.454 16:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.454 16:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.454 16:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:44.454 16:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:44.714 00:17:44.714 16:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:44.714 16:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:44.714 16:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.714 16:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.714 16:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.714 16:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.714 16:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.714 16:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.715 16:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:44.715 { 00:17:44.715 "cntlid": 7, 00:17:44.715 "qid": 0, 00:17:44.715 "state": "enabled", 00:17:44.715 "thread": "nvmf_tgt_poll_group_000", 00:17:44.715 "listen_address": { 00:17:44.715 "trtype": "TCP", 00:17:44.715 "adrfam": "IPv4", 00:17:44.715 "traddr": "10.0.0.2", 00:17:44.715 "trsvcid": "4420" 00:17:44.715 }, 00:17:44.715 "peer_address": { 00:17:44.715 "trtype": "TCP", 00:17:44.715 "adrfam": "IPv4", 00:17:44.715 "traddr": "10.0.0.1", 00:17:44.715 "trsvcid": "49326" 00:17:44.715 }, 00:17:44.715 "auth": { 00:17:44.715 "state": "completed", 00:17:44.715 "digest": "sha256", 00:17:44.715 "dhgroup": "null" 00:17:44.715 } 00:17:44.715 } 00:17:44.715 ]' 00:17:44.715 16:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:44.715 16:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:44.715 16:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:44.975 16:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:44.975 16:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:44.975 16:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.975 16:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.975 16:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.975 16:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NTQ1Y2Y0MDc5Mzc3MTgzNWYyZTZmOTQ3ZmZjMDBlZWZiNTllMGMyY2U1YWYzZjVhYmQ0YjRlMDUxM2I4NzNkMSKmw9Y=: 00:17:45.914 16:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.914 16:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:45.914 16:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.914 16:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.914 16:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.914 16:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:45.914 16:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:45.914 16:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:45.914 16:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:45.914 16:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:17:45.914 16:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:45.914 16:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:45.914 16:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:45.914 16:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:45.914 16:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.914 16:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.914 16:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.914 16:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.914 16:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.914 16:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.914 16:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.174 00:17:46.174 16:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:46.174 16:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:46.174 16:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.434 16:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.434 16:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.434 16:09:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:17:46.434 16:09:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.434 16:09:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.434 16:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:46.434 { 00:17:46.434 "cntlid": 9, 00:17:46.434 "qid": 0, 00:17:46.434 "state": "enabled", 00:17:46.434 "thread": "nvmf_tgt_poll_group_000", 00:17:46.434 "listen_address": { 00:17:46.434 "trtype": "TCP", 00:17:46.434 "adrfam": "IPv4", 00:17:46.434 "traddr": "10.0.0.2", 00:17:46.434 "trsvcid": "4420" 00:17:46.434 }, 00:17:46.435 "peer_address": { 00:17:46.435 "trtype": "TCP", 00:17:46.435 "adrfam": "IPv4", 00:17:46.435 "traddr": "10.0.0.1", 00:17:46.435 "trsvcid": "49342" 00:17:46.435 }, 00:17:46.435 "auth": { 00:17:46.435 "state": "completed", 00:17:46.435 "digest": "sha256", 00:17:46.435 "dhgroup": "ffdhe2048" 00:17:46.435 } 00:17:46.435 } 00:17:46.435 ]' 00:17:46.435 16:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:46.435 16:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:46.435 16:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:46.435 16:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:46.435 16:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:46.435 16:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.435 16:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.435 16:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.694 16:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ODkxNGYzYjRlYWVkYzI1YTIyMDkzNjNjNmE4MTFkMzZiZjc2YWRkZDUzNWU5NWJlAOylMg==: --dhchap-ctrl-secret DHHC-1:03:OThjM2QwODYwZTU2ZWI1MTgxNjM2MjdhZTFiOWM4NjFkNzAwNzg5MjEyMWMwNTQ0NWY4MjFhMjE4OGQyYjMwOPhTa7Y=: 00:17:47.633 16:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.633 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.633 16:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:47.633 16:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.633 16:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.633 16:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.633 16:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:47.633 16:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:47.633 16:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:17:47.633 16:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:17:47.633 16:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:47.633 16:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:47.633 16:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:47.633 16:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:47.633 16:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.633 16:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.633 16:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.633 16:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.633 16:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.633 16:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.633 16:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.894 00:17:47.894 16:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:47.894 16:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:47.894 16:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.154 16:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.154 16:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.154 16:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.154 16:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.154 16:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.154 16:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:48.154 { 00:17:48.154 "cntlid": 11, 00:17:48.154 "qid": 0, 00:17:48.154 "state": "enabled", 00:17:48.154 "thread": "nvmf_tgt_poll_group_000", 00:17:48.154 "listen_address": { 00:17:48.154 "trtype": "TCP", 00:17:48.154 "adrfam": "IPv4", 00:17:48.154 "traddr": "10.0.0.2", 00:17:48.154 "trsvcid": "4420" 00:17:48.154 }, 00:17:48.154 "peer_address": { 00:17:48.154 "trtype": "TCP", 00:17:48.154 "adrfam": "IPv4", 00:17:48.154 "traddr": "10.0.0.1", 00:17:48.154 "trsvcid": "49370" 00:17:48.154 }, 00:17:48.154 "auth": { 00:17:48.154 "state": "completed", 00:17:48.154 "digest": "sha256", 00:17:48.154 "dhgroup": "ffdhe2048" 00:17:48.154 } 00:17:48.154 } 00:17:48.154 ]' 00:17:48.154 
16:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:48.154 16:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:48.154 16:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:48.154 16:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:48.154 16:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:48.154 16:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.154 16:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.154 16:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.414 16:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:Nzk3MGM0ZGFjYWRlNTIyMTFmZWMxMDhkNzkwZWIwYWOuEthg: --dhchap-ctrl-secret DHHC-1:02:MzI1ZjJmNWJiZTUwMDQ1MGU2NDk5NmJiZjViZGZiNmZmMjVlMmI3ZjkyOGEzZmM3xvx5zA==: 00:17:49.033 16:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.033 16:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:49.033 16:09:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.033 16:09:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.033 16:09:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.033 16:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:49.033 16:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:49.033 16:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:49.306 16:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:17:49.306 16:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:49.306 16:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:49.306 16:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:49.306 16:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:49.306 16:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.306 16:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.306 16:09:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.306 16:09:24 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:49.306 16:09:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.306 16:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.306 16:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.567 00:17:49.567 16:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:49.567 16:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:49.567 16:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.567 16:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.567 16:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.567 16:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.567 16:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.567 16:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.567 16:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:49.567 { 00:17:49.567 "cntlid": 13, 00:17:49.567 "qid": 0, 00:17:49.567 "state": "enabled", 00:17:49.567 "thread": "nvmf_tgt_poll_group_000", 00:17:49.567 "listen_address": { 00:17:49.567 "trtype": "TCP", 00:17:49.567 "adrfam": "IPv4", 00:17:49.567 "traddr": "10.0.0.2", 00:17:49.567 "trsvcid": "4420" 00:17:49.567 }, 00:17:49.567 "peer_address": { 00:17:49.567 "trtype": "TCP", 00:17:49.567 "adrfam": "IPv4", 00:17:49.567 "traddr": "10.0.0.1", 00:17:49.567 "trsvcid": "49398" 00:17:49.567 }, 00:17:49.567 "auth": { 00:17:49.567 "state": "completed", 00:17:49.567 "digest": "sha256", 00:17:49.567 "dhgroup": "ffdhe2048" 00:17:49.567 } 00:17:49.567 } 00:17:49.567 ]' 00:17:49.567 16:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:49.829 16:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:49.829 16:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:49.829 16:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:49.829 16:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:49.829 16:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.829 16:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.829 16:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.089 16:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZjZkMWQ3NGI2NTkxYzQ5ZTNhOTJiN2NlM2EwNDNkMzU2NzRkOWVjY2IwOTdiOTRjz3qt5A==: --dhchap-ctrl-secret DHHC-1:01:MzA0Nzc2YTI0NTg0ZjcyZGFkYTZlY2NkMmE5MTdiOTi8reIF: 00:17:50.659 16:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.659 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.659 16:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:50.659 16:09:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.659 16:09:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.659 16:09:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.659 16:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:50.659 16:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:50.659 16:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:50.919 16:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:17:50.919 16:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:50.919 16:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:50.919 16:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:50.919 16:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:50.919 16:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.919 16:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:50.919 16:09:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.919 16:09:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.919 16:09:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.919 16:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:50.919 16:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:51.180 00:17:51.180 16:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:51.180 16:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:51.180 16:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.441 16:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.441 16:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.441 16:09:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.441 16:09:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.441 16:09:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.441 16:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:51.441 { 00:17:51.441 "cntlid": 15, 00:17:51.441 "qid": 0, 00:17:51.441 "state": "enabled", 00:17:51.441 "thread": "nvmf_tgt_poll_group_000", 00:17:51.441 "listen_address": { 00:17:51.441 "trtype": "TCP", 00:17:51.441 "adrfam": "IPv4", 00:17:51.441 "traddr": "10.0.0.2", 00:17:51.441 "trsvcid": "4420" 00:17:51.441 }, 00:17:51.441 "peer_address": { 00:17:51.441 "trtype": "TCP", 00:17:51.441 "adrfam": "IPv4", 00:17:51.441 "traddr": "10.0.0.1", 00:17:51.441 "trsvcid": "49424" 00:17:51.441 }, 00:17:51.441 "auth": { 00:17:51.441 "state": "completed", 00:17:51.441 "digest": "sha256", 00:17:51.441 "dhgroup": "ffdhe2048" 00:17:51.441 } 00:17:51.441 } 00:17:51.441 ]' 00:17:51.441 16:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:51.441 16:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:51.441 16:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:51.441 16:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:51.441 16:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:51.441 16:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.441 16:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.441 16:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.701 16:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NTQ1Y2Y0MDc5Mzc3MTgzNWYyZTZmOTQ3ZmZjMDBlZWZiNTllMGMyY2U1YWYzZjVhYmQ0YjRlMDUxM2I4NzNkMSKmw9Y=: 00:17:52.272 16:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.272 16:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:52.272 16:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.273 16:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.533 16:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.533 16:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:52.533 16:09:28 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:52.533 16:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:52.533 16:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:52.533 16:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:17:52.533 16:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:52.533 16:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:52.533 16:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:52.533 16:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:52.533 16:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.533 16:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.533 16:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.533 16:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.533 16:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.533 16:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.533 16:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.793 00:17:52.793 16:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:52.793 16:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:52.794 16:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.054 16:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.054 16:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.054 16:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.054 16:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.055 16:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.055 16:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:53.055 { 00:17:53.055 "cntlid": 17, 00:17:53.055 "qid": 0, 00:17:53.055 "state": "enabled", 00:17:53.055 "thread": "nvmf_tgt_poll_group_000", 00:17:53.055 "listen_address": { 00:17:53.055 "trtype": "TCP", 00:17:53.055 "adrfam": "IPv4", 00:17:53.055 "traddr": 
"10.0.0.2", 00:17:53.055 "trsvcid": "4420" 00:17:53.055 }, 00:17:53.055 "peer_address": { 00:17:53.055 "trtype": "TCP", 00:17:53.055 "adrfam": "IPv4", 00:17:53.055 "traddr": "10.0.0.1", 00:17:53.055 "trsvcid": "52264" 00:17:53.055 }, 00:17:53.055 "auth": { 00:17:53.055 "state": "completed", 00:17:53.055 "digest": "sha256", 00:17:53.055 "dhgroup": "ffdhe3072" 00:17:53.055 } 00:17:53.055 } 00:17:53.055 ]' 00:17:53.055 16:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:53.055 16:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:53.055 16:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:53.055 16:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:53.055 16:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:53.055 16:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.055 16:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.055 16:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.316 16:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ODkxNGYzYjRlYWVkYzI1YTIyMDkzNjNjNmE4MTFkMzZiZjc2YWRkZDUzNWU5NWJlAOylMg==: --dhchap-ctrl-secret DHHC-1:03:OThjM2QwODYwZTU2ZWI1MTgxNjM2MjdhZTFiOWM4NjFkNzAwNzg5MjEyMWMwNTQ0NWY4MjFhMjE4OGQyYjMwOPhTa7Y=: 00:17:54.257 16:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.257 16:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:54.257 16:09:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.257 16:09:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.257 16:09:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.257 16:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:54.257 16:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:54.257 16:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:54.257 16:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:17:54.257 16:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:54.257 16:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:54.257 16:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:54.257 16:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:54.257 16:09:29 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.257 16:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.257 16:09:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.257 16:09:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.257 16:09:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.257 16:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.257 16:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.518 00:17:54.518 16:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:54.518 16:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:54.518 16:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.518 16:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.518 16:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.518 16:09:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.518 16:09:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.778 16:09:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.778 16:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:54.778 { 00:17:54.778 "cntlid": 19, 00:17:54.778 "qid": 0, 00:17:54.778 "state": "enabled", 00:17:54.778 "thread": "nvmf_tgt_poll_group_000", 00:17:54.778 "listen_address": { 00:17:54.778 "trtype": "TCP", 00:17:54.778 "adrfam": "IPv4", 00:17:54.778 "traddr": "10.0.0.2", 00:17:54.778 "trsvcid": "4420" 00:17:54.778 }, 00:17:54.778 "peer_address": { 00:17:54.778 "trtype": "TCP", 00:17:54.778 "adrfam": "IPv4", 00:17:54.778 "traddr": "10.0.0.1", 00:17:54.778 "trsvcid": "52300" 00:17:54.778 }, 00:17:54.779 "auth": { 00:17:54.779 "state": "completed", 00:17:54.779 "digest": "sha256", 00:17:54.779 "dhgroup": "ffdhe3072" 00:17:54.779 } 00:17:54.779 } 00:17:54.779 ]' 00:17:54.779 16:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:54.779 16:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:54.779 16:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:54.779 16:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:54.779 16:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:54.779 16:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.779 16:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.779 16:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.039 16:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:Nzk3MGM0ZGFjYWRlNTIyMTFmZWMxMDhkNzkwZWIwYWOuEthg: --dhchap-ctrl-secret DHHC-1:02:MzI1ZjJmNWJiZTUwMDQ1MGU2NDk5NmJiZjViZGZiNmZmMjVlMmI3ZjkyOGEzZmM3xvx5zA==: 00:17:55.610 16:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.610 16:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:55.610 16:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.610 16:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.610 16:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.610 16:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:55.610 16:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:55.610 16:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:55.870 16:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:17:55.870 16:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:55.870 16:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:55.870 16:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:55.870 16:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:55.870 16:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.870 16:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.870 16:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.870 16:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.870 16:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.870 16:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.870 16:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.129 00:17:56.129 16:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:56.129 16:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:56.129 16:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.390 16:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.390 16:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.390 16:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.390 16:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.390 16:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.390 16:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:56.390 { 00:17:56.390 "cntlid": 21, 00:17:56.390 "qid": 0, 00:17:56.390 "state": "enabled", 00:17:56.390 "thread": "nvmf_tgt_poll_group_000", 00:17:56.390 "listen_address": { 00:17:56.390 "trtype": "TCP", 00:17:56.390 "adrfam": "IPv4", 00:17:56.390 "traddr": "10.0.0.2", 00:17:56.390 "trsvcid": "4420" 00:17:56.390 }, 00:17:56.390 "peer_address": { 00:17:56.390 "trtype": "TCP", 00:17:56.390 "adrfam": "IPv4", 00:17:56.390 "traddr": "10.0.0.1", 00:17:56.390 "trsvcid": "52322" 00:17:56.390 }, 00:17:56.390 "auth": { 00:17:56.390 "state": "completed", 00:17:56.390 "digest": "sha256", 00:17:56.390 "dhgroup": "ffdhe3072" 00:17:56.390 } 00:17:56.390 } 00:17:56.390 ]' 00:17:56.390 16:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:56.390 16:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:56.390 16:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:56.390 16:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:56.390 16:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:56.390 16:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.390 16:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.390 16:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.650 16:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZjZkMWQ3NGI2NTkxYzQ5ZTNhOTJiN2NlM2EwNDNkMzU2NzRkOWVjY2IwOTdiOTRjz3qt5A==: --dhchap-ctrl-secret DHHC-1:01:MzA0Nzc2YTI0NTg0ZjcyZGFkYTZlY2NkMmE5MTdiOTi8reIF: 00:17:57.597 16:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:17:57.597 16:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:57.597 16:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.597 16:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.597 16:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.597 16:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:57.597 16:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:57.597 16:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:57.597 16:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:17:57.597 16:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:57.597 16:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:57.597 16:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:57.597 16:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:57.597 16:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.597 16:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:57.597 16:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.597 16:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.598 16:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.598 16:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:57.598 16:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:57.858 00:17:57.858 16:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:57.858 16:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.858 16:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:57.858 16:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.858 16:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.858 16:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.858 16:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:17:58.118 16:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.118 16:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:58.118 { 00:17:58.118 "cntlid": 23, 00:17:58.118 "qid": 0, 00:17:58.118 "state": "enabled", 00:17:58.118 "thread": "nvmf_tgt_poll_group_000", 00:17:58.118 "listen_address": { 00:17:58.118 "trtype": "TCP", 00:17:58.118 "adrfam": "IPv4", 00:17:58.118 "traddr": "10.0.0.2", 00:17:58.118 "trsvcid": "4420" 00:17:58.118 }, 00:17:58.118 "peer_address": { 00:17:58.118 "trtype": "TCP", 00:17:58.118 "adrfam": "IPv4", 00:17:58.118 "traddr": "10.0.0.1", 00:17:58.118 "trsvcid": "52348" 00:17:58.118 }, 00:17:58.118 "auth": { 00:17:58.118 "state": "completed", 00:17:58.118 "digest": "sha256", 00:17:58.118 "dhgroup": "ffdhe3072" 00:17:58.118 } 00:17:58.118 } 00:17:58.118 ]' 00:17:58.118 16:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:58.118 16:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:58.118 16:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:58.118 16:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:58.118 16:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:58.118 16:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.118 16:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.118 16:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.379 16:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NTQ1Y2Y0MDc5Mzc3MTgzNWYyZTZmOTQ3ZmZjMDBlZWZiNTllMGMyY2U1YWYzZjVhYmQ0YjRlMDUxM2I4NzNkMSKmw9Y=: 00:17:58.950 16:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.950 16:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:58.950 16:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.950 16:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.950 16:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.950 16:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:58.950 16:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:58.950 16:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:58.950 16:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:59.210 16:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:17:59.210 16:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:59.210 16:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:59.210 16:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:59.210 16:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:59.210 16:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.210 16:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.210 16:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.210 16:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.210 16:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.211 16:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.211 16:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.471 00:17:59.471 16:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:59.471 16:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:59.471 16:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.731 16:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.731 16:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.731 16:09:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.731 16:09:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.731 16:09:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.731 16:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:59.731 { 00:17:59.731 "cntlid": 25, 00:17:59.731 "qid": 0, 00:17:59.731 "state": "enabled", 00:17:59.731 "thread": "nvmf_tgt_poll_group_000", 00:17:59.731 "listen_address": { 00:17:59.731 "trtype": "TCP", 00:17:59.731 "adrfam": "IPv4", 00:17:59.731 "traddr": "10.0.0.2", 00:17:59.731 "trsvcid": "4420" 00:17:59.731 }, 00:17:59.731 "peer_address": { 00:17:59.731 "trtype": "TCP", 00:17:59.731 "adrfam": "IPv4", 00:17:59.731 "traddr": "10.0.0.1", 00:17:59.731 "trsvcid": "52360" 00:17:59.731 }, 00:17:59.731 "auth": { 00:17:59.731 "state": "completed", 00:17:59.731 "digest": "sha256", 00:17:59.731 "dhgroup": "ffdhe4096" 00:17:59.731 } 00:17:59.731 } 00:17:59.731 ]' 00:17:59.731 16:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:59.731 16:09:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:59.731 16:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:59.731 16:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:59.731 16:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:59.731 16:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.731 16:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.731 16:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.990 16:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ODkxNGYzYjRlYWVkYzI1YTIyMDkzNjNjNmE4MTFkMzZiZjc2YWRkZDUzNWU5NWJlAOylMg==: --dhchap-ctrl-secret DHHC-1:03:OThjM2QwODYwZTU2ZWI1MTgxNjM2MjdhZTFiOWM4NjFkNzAwNzg5MjEyMWMwNTQ0NWY4MjFhMjE4OGQyYjMwOPhTa7Y=: 00:18:00.571 16:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.831 16:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:00.831 16:09:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.831 16:09:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.831 16:09:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.831 16:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:00.831 16:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:00.831 16:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:00.831 16:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:18:00.831 16:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:00.831 16:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:00.831 16:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:00.831 16:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:00.831 16:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.831 16:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.831 16:09:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.831 16:09:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.831 16:09:36 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.831 16:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.831 16:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.090 00:18:01.090 16:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:01.090 16:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:01.090 16:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.349 16:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.349 16:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.349 16:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.349 16:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.349 16:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.349 16:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:01.349 { 00:18:01.349 "cntlid": 27, 00:18:01.349 "qid": 0, 00:18:01.349 "state": "enabled", 00:18:01.349 "thread": "nvmf_tgt_poll_group_000", 00:18:01.349 "listen_address": { 00:18:01.349 "trtype": "TCP", 00:18:01.349 "adrfam": "IPv4", 00:18:01.349 "traddr": "10.0.0.2", 00:18:01.349 "trsvcid": "4420" 00:18:01.349 }, 00:18:01.349 "peer_address": { 00:18:01.349 "trtype": "TCP", 00:18:01.349 "adrfam": "IPv4", 00:18:01.349 "traddr": "10.0.0.1", 00:18:01.350 "trsvcid": "52382" 00:18:01.350 }, 00:18:01.350 "auth": { 00:18:01.350 "state": "completed", 00:18:01.350 "digest": "sha256", 00:18:01.350 "dhgroup": "ffdhe4096" 00:18:01.350 } 00:18:01.350 } 00:18:01.350 ]' 00:18:01.350 16:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:01.350 16:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:01.350 16:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:01.350 16:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:01.350 16:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:01.350 16:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.350 16:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.350 16:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.610 16:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:Nzk3MGM0ZGFjYWRlNTIyMTFmZWMxMDhkNzkwZWIwYWOuEthg: --dhchap-ctrl-secret DHHC-1:02:MzI1ZjJmNWJiZTUwMDQ1MGU2NDk5NmJiZjViZGZiNmZmMjVlMmI3ZjkyOGEzZmM3xvx5zA==: 00:18:02.551 16:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.552 16:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:02.552 16:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.552 16:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.552 16:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.552 16:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:02.552 16:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:02.552 16:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:02.552 16:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:18:02.552 16:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:02.552 16:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:02.552 16:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:02.552 16:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:02.552 16:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.552 16:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.552 16:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.552 16:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.552 16:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.552 16:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.552 16:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.813 00:18:02.813 16:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:02.813 16:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.813 16:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:03.074 16:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.074 16:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.074 16:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.074 16:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.074 16:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.074 16:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:03.074 { 00:18:03.074 "cntlid": 29, 00:18:03.074 "qid": 0, 00:18:03.074 "state": "enabled", 00:18:03.074 "thread": "nvmf_tgt_poll_group_000", 00:18:03.074 "listen_address": { 00:18:03.074 "trtype": "TCP", 00:18:03.074 "adrfam": "IPv4", 00:18:03.074 "traddr": "10.0.0.2", 00:18:03.074 "trsvcid": "4420" 00:18:03.074 }, 00:18:03.074 "peer_address": { 00:18:03.074 "trtype": "TCP", 00:18:03.074 "adrfam": "IPv4", 00:18:03.074 "traddr": "10.0.0.1", 00:18:03.074 "trsvcid": "44482" 00:18:03.074 }, 00:18:03.074 "auth": { 00:18:03.074 "state": "completed", 00:18:03.074 "digest": "sha256", 00:18:03.074 "dhgroup": "ffdhe4096" 00:18:03.074 } 00:18:03.074 } 00:18:03.074 ]' 00:18:03.074 16:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:03.074 16:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:03.074 16:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:03.074 16:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:03.074 16:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:03.074 16:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.074 16:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.074 16:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.338 16:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZjZkMWQ3NGI2NTkxYzQ5ZTNhOTJiN2NlM2EwNDNkMzU2NzRkOWVjY2IwOTdiOTRjz3qt5A==: --dhchap-ctrl-secret DHHC-1:01:MzA0Nzc2YTI0NTg0ZjcyZGFkYTZlY2NkMmE5MTdiOTi8reIF: 00:18:03.972 16:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.972 16:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:03.972 16:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.972 16:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.233 16:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
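The block above is one full pass of the loop in target/auth.sh for the digest/dhgroup pair under test: the host-side bdev_nvme options are narrowed to that dhgroup, the target subsystem is given the DH-HMAC-CHAP key for the host NQN, a controller is attached through the host rpc.py socket, the negotiated auth block of the resulting qpair is checked, and the pairing is torn down again. A minimal sketch of that sequence, condensed from the commands visible in the trace (RPC, HOST_SOCK, SUBNQN and HOSTNQN are placeholder variables taken from this run; this is not a reimplementation of auth.sh):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

# host side: restrict DH-HMAC-CHAP negotiation to the digest/dhgroup under test
$RPC -s $HOST_SOCK bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

# target side: register the host NQN with its key pair
# (the key2/ckey2 names were set up earlier in the run, not shown in this part of the trace)
$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key2 --dhchap-ctrlr-key ckey2

# host side: attach a controller so the new qpair has to authenticate
$RPC -s $HOST_SOCK bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q $HOSTNQN -n $SUBNQN --dhchap-key key2 --dhchap-ctrlr-key ckey2

# verify the negotiated auth block on the target, then tear the pairing down
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.digest'    # expected: sha256
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.dhgroup'   # expected: ffdhe4096
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.state'     # expected: completed
$RPC -s $HOST_SOCK bdev_nvme_detach_controller nvme0
# (the trace also exercises the same keys through nvme-cli before removing the host; see the sketch further down)
$RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN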
00:18:04.233 16:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:04.233 16:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:04.233 16:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:04.233 16:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:18:04.233 16:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:04.233 16:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:04.233 16:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:04.234 16:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:04.234 16:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.234 16:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:04.234 16:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.234 16:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.234 16:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.234 16:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:04.234 16:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:04.493 00:18:04.493 16:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:04.493 16:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:04.493 16:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.753 16:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.753 16:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.753 16:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.753 16:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.753 16:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.753 16:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:04.753 { 00:18:04.753 "cntlid": 31, 00:18:04.753 "qid": 0, 00:18:04.753 "state": "enabled", 00:18:04.753 "thread": "nvmf_tgt_poll_group_000", 00:18:04.753 "listen_address": { 00:18:04.753 "trtype": "TCP", 00:18:04.753 "adrfam": "IPv4", 00:18:04.753 "traddr": "10.0.0.2", 00:18:04.753 "trsvcid": 
"4420" 00:18:04.753 }, 00:18:04.753 "peer_address": { 00:18:04.753 "trtype": "TCP", 00:18:04.753 "adrfam": "IPv4", 00:18:04.753 "traddr": "10.0.0.1", 00:18:04.753 "trsvcid": "44504" 00:18:04.753 }, 00:18:04.753 "auth": { 00:18:04.753 "state": "completed", 00:18:04.753 "digest": "sha256", 00:18:04.753 "dhgroup": "ffdhe4096" 00:18:04.753 } 00:18:04.753 } 00:18:04.753 ]' 00:18:04.753 16:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:04.753 16:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:04.753 16:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:04.753 16:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:04.753 16:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:04.753 16:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.753 16:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.753 16:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.014 16:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NTQ1Y2Y0MDc5Mzc3MTgzNWYyZTZmOTQ3ZmZjMDBlZWZiNTllMGMyY2U1YWYzZjVhYmQ0YjRlMDUxM2I4NzNkMSKmw9Y=: 00:18:05.954 16:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.954 16:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:05.954 16:09:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.954 16:09:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.954 16:09:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.954 16:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:05.954 16:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:05.954 16:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:05.954 16:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:05.954 16:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:18:05.954 16:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:05.954 16:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:05.954 16:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:05.954 16:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:05.954 16:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.954 16:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.954 16:09:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.954 16:09:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.954 16:09:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.954 16:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.954 16:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.214 00:18:06.214 16:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:06.214 16:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:06.214 16:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.475 16:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.475 16:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.475 16:09:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.475 16:09:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.475 16:09:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.475 16:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:06.475 { 00:18:06.475 "cntlid": 33, 00:18:06.475 "qid": 0, 00:18:06.475 "state": "enabled", 00:18:06.475 "thread": "nvmf_tgt_poll_group_000", 00:18:06.475 "listen_address": { 00:18:06.475 "trtype": "TCP", 00:18:06.475 "adrfam": "IPv4", 00:18:06.475 "traddr": "10.0.0.2", 00:18:06.475 "trsvcid": "4420" 00:18:06.475 }, 00:18:06.475 "peer_address": { 00:18:06.475 "trtype": "TCP", 00:18:06.475 "adrfam": "IPv4", 00:18:06.475 "traddr": "10.0.0.1", 00:18:06.475 "trsvcid": "44540" 00:18:06.475 }, 00:18:06.475 "auth": { 00:18:06.475 "state": "completed", 00:18:06.475 "digest": "sha256", 00:18:06.475 "dhgroup": "ffdhe6144" 00:18:06.475 } 00:18:06.475 } 00:18:06.475 ]' 00:18:06.475 16:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:06.475 16:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:06.475 16:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:06.475 16:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:06.475 16:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:06.475 16:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:18:06.475 16:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.475 16:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.736 16:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ODkxNGYzYjRlYWVkYzI1YTIyMDkzNjNjNmE4MTFkMzZiZjc2YWRkZDUzNWU5NWJlAOylMg==: --dhchap-ctrl-secret DHHC-1:03:OThjM2QwODYwZTU2ZWI1MTgxNjM2MjdhZTFiOWM4NjFkNzAwNzg5MjEyMWMwNTQ0NWY4MjFhMjE4OGQyYjMwOPhTa7Y=: 00:18:07.678 16:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.678 16:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:07.678 16:09:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.678 16:09:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.678 16:09:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.678 16:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:07.678 16:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:07.678 16:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:07.678 16:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:18:07.678 16:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:07.678 16:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:07.678 16:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:07.678 16:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:07.678 16:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.678 16:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.678 16:09:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.678 16:09:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.678 16:09:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.678 16:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.678 16:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.938 00:18:07.938 16:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:07.938 16:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.938 16:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:08.199 16:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.199 16:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.199 16:09:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.199 16:09:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.199 16:09:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.199 16:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:08.199 { 00:18:08.199 "cntlid": 35, 00:18:08.199 "qid": 0, 00:18:08.199 "state": "enabled", 00:18:08.199 "thread": "nvmf_tgt_poll_group_000", 00:18:08.199 "listen_address": { 00:18:08.199 "trtype": "TCP", 00:18:08.199 "adrfam": "IPv4", 00:18:08.199 "traddr": "10.0.0.2", 00:18:08.199 "trsvcid": "4420" 00:18:08.199 }, 00:18:08.199 "peer_address": { 00:18:08.199 "trtype": "TCP", 00:18:08.199 "adrfam": "IPv4", 00:18:08.199 "traddr": "10.0.0.1", 00:18:08.199 "trsvcid": "44558" 00:18:08.199 }, 00:18:08.199 "auth": { 00:18:08.199 "state": "completed", 00:18:08.200 "digest": "sha256", 00:18:08.200 "dhgroup": "ffdhe6144" 00:18:08.200 } 00:18:08.200 } 00:18:08.200 ]' 00:18:08.200 16:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:08.200 16:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:08.200 16:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:08.200 16:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:08.200 16:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:08.460 16:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.460 16:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.460 16:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.460 16:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:Nzk3MGM0ZGFjYWRlNTIyMTFmZWMxMDhkNzkwZWIwYWOuEthg: --dhchap-ctrl-secret DHHC-1:02:MzI1ZjJmNWJiZTUwMDQ1MGU2NDk5NmJiZjViZGZiNmZmMjVlMmI3ZjkyOGEzZmM3xvx5zA==: 00:18:09.401 16:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.401 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
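Besides the RPC-attached controller, each pass also drives the kernel initiator through nvme-cli with the same key material, passing the DHHC-1 secrets directly on the command line, as in the connect/disconnect lines above. A bare sketch of that leg, using only the flags that appear in the trace ($DHCHAP_SECRET and $DHCHAP_CTRL_SECRET stand in for the DHHC-1:xx:... strings shown in the log):

SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be

# connect with the flags used in the trace: one I/O queue, explicit hostnqn/hostid,
# and the DH-HMAC-CHAP host secret (plus the controller secret when one is configured)
nvme connect -t tcp -a 10.0.0.2 -n $SUBNQN -i 1 -q $HOSTNQN --hostid $HOSTID \
    --dhchap-secret "$DHCHAP_SECRET" --dhchap-ctrl-secret "$DHCHAP_CTRL_SECRET"

# drop the connection again before the host is removed from the subsystem
nvme disconnect -n $SUBNQN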
00:18:09.402 16:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:09.402 16:09:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.402 16:09:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.402 16:09:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.402 16:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:09.402 16:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:09.402 16:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:09.402 16:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:18:09.402 16:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:09.402 16:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:09.402 16:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:09.402 16:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:09.402 16:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.402 16:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.402 16:09:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.402 16:09:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.402 16:09:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.402 16:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.402 16:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.972 00:18:09.972 16:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:09.972 16:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:09.972 16:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.972 16:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.972 16:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.972 16:09:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
00:18:09.972 16:09:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.972 16:09:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.972 16:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:09.972 { 00:18:09.972 "cntlid": 37, 00:18:09.972 "qid": 0, 00:18:09.972 "state": "enabled", 00:18:09.972 "thread": "nvmf_tgt_poll_group_000", 00:18:09.972 "listen_address": { 00:18:09.972 "trtype": "TCP", 00:18:09.972 "adrfam": "IPv4", 00:18:09.972 "traddr": "10.0.0.2", 00:18:09.972 "trsvcid": "4420" 00:18:09.972 }, 00:18:09.972 "peer_address": { 00:18:09.972 "trtype": "TCP", 00:18:09.972 "adrfam": "IPv4", 00:18:09.972 "traddr": "10.0.0.1", 00:18:09.972 "trsvcid": "44602" 00:18:09.972 }, 00:18:09.972 "auth": { 00:18:09.972 "state": "completed", 00:18:09.972 "digest": "sha256", 00:18:09.972 "dhgroup": "ffdhe6144" 00:18:09.972 } 00:18:09.972 } 00:18:09.972 ]' 00:18:09.972 16:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:09.972 16:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:09.972 16:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:10.234 16:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:10.234 16:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:10.234 16:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.234 16:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.234 16:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.234 16:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZjZkMWQ3NGI2NTkxYzQ5ZTNhOTJiN2NlM2EwNDNkMzU2NzRkOWVjY2IwOTdiOTRjz3qt5A==: --dhchap-ctrl-secret DHHC-1:01:MzA0Nzc2YTI0NTg0ZjcyZGFkYTZlY2NkMmE5MTdiOTi8reIF: 00:18:11.180 16:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.180 16:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:11.180 16:09:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.180 16:09:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.180 16:09:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.180 16:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:11.180 16:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:11.180 16:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:11.180 16:09:46 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:18:11.180 16:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:11.180 16:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:11.180 16:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:11.180 16:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:11.180 16:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.180 16:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:11.180 16:09:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.180 16:09:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.181 16:09:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.181 16:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:11.181 16:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:11.752 00:18:11.752 16:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:11.752 16:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.752 16:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:11.752 16:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.752 16:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.752 16:09:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.752 16:09:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.752 16:09:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.752 16:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:11.752 { 00:18:11.752 "cntlid": 39, 00:18:11.752 "qid": 0, 00:18:11.752 "state": "enabled", 00:18:11.752 "thread": "nvmf_tgt_poll_group_000", 00:18:11.752 "listen_address": { 00:18:11.752 "trtype": "TCP", 00:18:11.752 "adrfam": "IPv4", 00:18:11.752 "traddr": "10.0.0.2", 00:18:11.752 "trsvcid": "4420" 00:18:11.752 }, 00:18:11.752 "peer_address": { 00:18:11.752 "trtype": "TCP", 00:18:11.752 "adrfam": "IPv4", 00:18:11.752 "traddr": "10.0.0.1", 00:18:11.752 "trsvcid": "44638" 00:18:11.752 }, 00:18:11.752 "auth": { 00:18:11.752 "state": "completed", 00:18:11.752 "digest": "sha256", 00:18:11.752 "dhgroup": "ffdhe6144" 00:18:11.752 } 00:18:11.752 } 00:18:11.752 ]' 00:18:11.752 16:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:11.752 16:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # 
[[ sha256 == \s\h\a\2\5\6 ]] 00:18:11.752 16:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:12.012 16:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:12.012 16:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:12.012 16:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.012 16:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.012 16:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.012 16:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NTQ1Y2Y0MDc5Mzc3MTgzNWYyZTZmOTQ3ZmZjMDBlZWZiNTllMGMyY2U1YWYzZjVhYmQ0YjRlMDUxM2I4NzNkMSKmw9Y=: 00:18:12.953 16:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.953 16:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:12.953 16:09:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.953 16:09:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.953 16:09:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.953 16:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:12.953 16:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:12.953 16:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:12.953 16:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:12.953 16:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:18:12.953 16:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:12.953 16:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:12.953 16:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:12.953 16:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:12.953 16:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.953 16:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.953 16:09:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.953 16:09:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.953 16:09:48 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.953 16:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.953 16:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.524 00:18:13.524 16:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:13.524 16:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:13.524 16:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.784 16:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.784 16:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.784 16:09:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.784 16:09:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.784 16:09:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.784 16:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:13.784 { 00:18:13.784 "cntlid": 41, 00:18:13.784 "qid": 0, 00:18:13.784 "state": "enabled", 00:18:13.784 "thread": "nvmf_tgt_poll_group_000", 00:18:13.784 "listen_address": { 00:18:13.784 "trtype": "TCP", 00:18:13.784 "adrfam": "IPv4", 00:18:13.784 "traddr": "10.0.0.2", 00:18:13.784 "trsvcid": "4420" 00:18:13.784 }, 00:18:13.784 "peer_address": { 00:18:13.784 "trtype": "TCP", 00:18:13.784 "adrfam": "IPv4", 00:18:13.784 "traddr": "10.0.0.1", 00:18:13.784 "trsvcid": "53994" 00:18:13.784 }, 00:18:13.784 "auth": { 00:18:13.784 "state": "completed", 00:18:13.784 "digest": "sha256", 00:18:13.784 "dhgroup": "ffdhe8192" 00:18:13.784 } 00:18:13.784 } 00:18:13.784 ]' 00:18:13.784 16:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:13.784 16:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:13.784 16:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:13.784 16:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:13.784 16:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:13.784 16:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.784 16:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.784 16:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.044 16:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ODkxNGYzYjRlYWVkYzI1YTIyMDkzNjNjNmE4MTFkMzZiZjc2YWRkZDUzNWU5NWJlAOylMg==: --dhchap-ctrl-secret DHHC-1:03:OThjM2QwODYwZTU2ZWI1MTgxNjM2MjdhZTFiOWM4NjFkNzAwNzg5MjEyMWMwNTQ0NWY4MjFhMjE4OGQyYjMwOPhTa7Y=: 00:18:14.986 16:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.986 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.986 16:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:14.986 16:09:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.986 16:09:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.986 16:09:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.986 16:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:14.986 16:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:14.986 16:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:14.986 16:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:18:14.986 16:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:14.986 16:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:14.986 16:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:14.986 16:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:14.986 16:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.986 16:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.986 16:09:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.986 16:09:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.986 16:09:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.986 16:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.986 16:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.559 00:18:15.559 16:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:15.559 16:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.559 16:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:15.559 16:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.559 16:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.559 16:09:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.559 16:09:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.559 16:09:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.559 16:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:15.559 { 00:18:15.559 "cntlid": 43, 00:18:15.559 "qid": 0, 00:18:15.559 "state": "enabled", 00:18:15.559 "thread": "nvmf_tgt_poll_group_000", 00:18:15.559 "listen_address": { 00:18:15.559 "trtype": "TCP", 00:18:15.559 "adrfam": "IPv4", 00:18:15.559 "traddr": "10.0.0.2", 00:18:15.559 "trsvcid": "4420" 00:18:15.559 }, 00:18:15.559 "peer_address": { 00:18:15.559 "trtype": "TCP", 00:18:15.559 "adrfam": "IPv4", 00:18:15.559 "traddr": "10.0.0.1", 00:18:15.559 "trsvcid": "54010" 00:18:15.559 }, 00:18:15.559 "auth": { 00:18:15.559 "state": "completed", 00:18:15.559 "digest": "sha256", 00:18:15.559 "dhgroup": "ffdhe8192" 00:18:15.559 } 00:18:15.559 } 00:18:15.559 ]' 00:18:15.559 16:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:15.820 16:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:15.820 16:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:15.820 16:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:15.820 16:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:15.820 16:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.820 16:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.820 16:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.079 16:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:Nzk3MGM0ZGFjYWRlNTIyMTFmZWMxMDhkNzkwZWIwYWOuEthg: --dhchap-ctrl-secret DHHC-1:02:MzI1ZjJmNWJiZTUwMDQ1MGU2NDk5NmJiZjViZGZiNmZmMjVlMmI3ZjkyOGEzZmM3xvx5zA==: 00:18:16.649 16:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.649 16:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:16.650 16:09:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.650 16:09:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.650 16:09:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.650 16:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 
-- # for keyid in "${!keys[@]}" 00:18:16.650 16:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:16.650 16:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:16.910 16:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:18:16.910 16:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:16.910 16:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:16.910 16:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:16.910 16:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:16.910 16:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.910 16:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.910 16:09:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.910 16:09:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.910 16:09:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.910 16:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.910 16:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.482 00:18:17.482 16:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:17.482 16:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:17.482 16:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.482 16:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.482 16:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.482 16:09:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.482 16:09:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.482 16:09:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.482 16:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:17.482 { 00:18:17.482 "cntlid": 45, 00:18:17.482 "qid": 0, 00:18:17.482 "state": "enabled", 00:18:17.482 "thread": "nvmf_tgt_poll_group_000", 00:18:17.482 "listen_address": { 00:18:17.482 "trtype": "TCP", 00:18:17.482 "adrfam": "IPv4", 00:18:17.482 "traddr": "10.0.0.2", 00:18:17.482 
"trsvcid": "4420" 00:18:17.482 }, 00:18:17.482 "peer_address": { 00:18:17.482 "trtype": "TCP", 00:18:17.482 "adrfam": "IPv4", 00:18:17.482 "traddr": "10.0.0.1", 00:18:17.482 "trsvcid": "54044" 00:18:17.482 }, 00:18:17.482 "auth": { 00:18:17.482 "state": "completed", 00:18:17.482 "digest": "sha256", 00:18:17.482 "dhgroup": "ffdhe8192" 00:18:17.482 } 00:18:17.482 } 00:18:17.482 ]' 00:18:17.482 16:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:17.743 16:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:17.743 16:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:17.743 16:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:17.743 16:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:17.743 16:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.743 16:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.743 16:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.003 16:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZjZkMWQ3NGI2NTkxYzQ5ZTNhOTJiN2NlM2EwNDNkMzU2NzRkOWVjY2IwOTdiOTRjz3qt5A==: --dhchap-ctrl-secret DHHC-1:01:MzA0Nzc2YTI0NTg0ZjcyZGFkYTZlY2NkMmE5MTdiOTi8reIF: 00:18:18.574 16:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.574 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.574 16:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:18.574 16:09:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.574 16:09:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.574 16:09:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.574 16:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:18.574 16:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:18.574 16:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:18.870 16:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:18:18.870 16:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:18.870 16:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:18.870 16:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:18.870 16:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:18.870 16:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
00:18:18.870 16:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:18.870 16:09:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.870 16:09:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.870 16:09:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.870 16:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:18.870 16:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:19.456 00:18:19.456 16:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:19.456 16:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.456 16:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:19.456 16:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.456 16:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.456 16:09:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.456 16:09:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.456 16:09:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.456 16:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:19.456 { 00:18:19.456 "cntlid": 47, 00:18:19.456 "qid": 0, 00:18:19.456 "state": "enabled", 00:18:19.456 "thread": "nvmf_tgt_poll_group_000", 00:18:19.456 "listen_address": { 00:18:19.456 "trtype": "TCP", 00:18:19.456 "adrfam": "IPv4", 00:18:19.456 "traddr": "10.0.0.2", 00:18:19.456 "trsvcid": "4420" 00:18:19.456 }, 00:18:19.456 "peer_address": { 00:18:19.456 "trtype": "TCP", 00:18:19.456 "adrfam": "IPv4", 00:18:19.456 "traddr": "10.0.0.1", 00:18:19.456 "trsvcid": "54074" 00:18:19.456 }, 00:18:19.456 "auth": { 00:18:19.456 "state": "completed", 00:18:19.456 "digest": "sha256", 00:18:19.456 "dhgroup": "ffdhe8192" 00:18:19.456 } 00:18:19.456 } 00:18:19.456 ]' 00:18:19.456 16:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:19.456 16:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:19.456 16:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:19.716 16:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:19.716 16:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:19.716 16:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.716 16:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller 
nvme0 00:18:19.716 16:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.716 16:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NTQ1Y2Y0MDc5Mzc3MTgzNWYyZTZmOTQ3ZmZjMDBlZWZiNTllMGMyY2U1YWYzZjVhYmQ0YjRlMDUxM2I4NzNkMSKmw9Y=: 00:18:20.653 16:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.653 16:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:20.653 16:09:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.653 16:09:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.653 16:09:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.653 16:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:20.653 16:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:20.653 16:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:20.653 16:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:20.653 16:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:20.653 16:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:18:20.653 16:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:20.653 16:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:20.653 16:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:20.653 16:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:20.653 16:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.654 16:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.654 16:09:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.654 16:09:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.654 16:09:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.654 16:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.654 16:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.914 00:18:20.914 16:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:20.914 16:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:20.914 16:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.197 16:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.197 16:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.197 16:09:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.197 16:09:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.197 16:09:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.197 16:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:21.197 { 00:18:21.197 "cntlid": 49, 00:18:21.197 "qid": 0, 00:18:21.197 "state": "enabled", 00:18:21.197 "thread": "nvmf_tgt_poll_group_000", 00:18:21.197 "listen_address": { 00:18:21.197 "trtype": "TCP", 00:18:21.197 "adrfam": "IPv4", 00:18:21.197 "traddr": "10.0.0.2", 00:18:21.197 "trsvcid": "4420" 00:18:21.197 }, 00:18:21.197 "peer_address": { 00:18:21.197 "trtype": "TCP", 00:18:21.197 "adrfam": "IPv4", 00:18:21.197 "traddr": "10.0.0.1", 00:18:21.197 "trsvcid": "54094" 00:18:21.197 }, 00:18:21.197 "auth": { 00:18:21.197 "state": "completed", 00:18:21.197 "digest": "sha384", 00:18:21.197 "dhgroup": "null" 00:18:21.197 } 00:18:21.197 } 00:18:21.197 ]' 00:18:21.197 16:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:21.198 16:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:21.198 16:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:21.198 16:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:21.198 16:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:21.198 16:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.198 16:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.198 16:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.458 16:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ODkxNGYzYjRlYWVkYzI1YTIyMDkzNjNjNmE4MTFkMzZiZjc2YWRkZDUzNWU5NWJlAOylMg==: --dhchap-ctrl-secret DHHC-1:03:OThjM2QwODYwZTU2ZWI1MTgxNjM2MjdhZTFiOWM4NjFkNzAwNzg5MjEyMWMwNTQ0NWY4MjFhMjE4OGQyYjMwOPhTa7Y=: 00:18:22.027 16:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.287 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.287 16:09:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:22.287 16:09:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.287 16:09:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.287 16:09:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.287 16:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:22.287 16:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:22.287 16:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:22.287 16:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:18:22.287 16:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:22.287 16:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:22.287 16:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:22.287 16:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:22.287 16:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.287 16:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.287 16:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.287 16:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.287 16:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.287 16:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.287 16:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.568 00:18:22.568 16:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:22.568 16:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:22.568 16:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.826 16:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.826 16:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.826 16:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.826 16:09:58 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:18:22.826 16:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.826 16:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:22.826 { 00:18:22.826 "cntlid": 51, 00:18:22.826 "qid": 0, 00:18:22.826 "state": "enabled", 00:18:22.826 "thread": "nvmf_tgt_poll_group_000", 00:18:22.826 "listen_address": { 00:18:22.826 "trtype": "TCP", 00:18:22.826 "adrfam": "IPv4", 00:18:22.826 "traddr": "10.0.0.2", 00:18:22.826 "trsvcid": "4420" 00:18:22.826 }, 00:18:22.826 "peer_address": { 00:18:22.826 "trtype": "TCP", 00:18:22.826 "adrfam": "IPv4", 00:18:22.826 "traddr": "10.0.0.1", 00:18:22.826 "trsvcid": "38344" 00:18:22.826 }, 00:18:22.826 "auth": { 00:18:22.826 "state": "completed", 00:18:22.826 "digest": "sha384", 00:18:22.826 "dhgroup": "null" 00:18:22.826 } 00:18:22.826 } 00:18:22.826 ]' 00:18:22.826 16:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:22.826 16:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:22.826 16:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:22.826 16:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:22.826 16:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:22.826 16:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.826 16:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.826 16:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.086 16:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:Nzk3MGM0ZGFjYWRlNTIyMTFmZWMxMDhkNzkwZWIwYWOuEthg: --dhchap-ctrl-secret DHHC-1:02:MzI1ZjJmNWJiZTUwMDQ1MGU2NDk5NmJiZjViZGZiNmZmMjVlMmI3ZjkyOGEzZmM3xvx5zA==: 00:18:23.656 16:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.656 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.656 16:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:23.656 16:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.656 16:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.915 16:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.915 16:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:23.915 16:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:23.915 16:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:23.915 16:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:18:23.915 
16:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:23.915 16:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:23.915 16:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:23.915 16:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:23.915 16:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.915 16:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.915 16:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.915 16:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.915 16:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.915 16:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.915 16:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.175 00:18:24.175 16:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:24.175 16:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:24.175 16:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.436 16:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.436 16:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.436 16:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.436 16:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.436 16:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.436 16:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:24.436 { 00:18:24.436 "cntlid": 53, 00:18:24.436 "qid": 0, 00:18:24.436 "state": "enabled", 00:18:24.436 "thread": "nvmf_tgt_poll_group_000", 00:18:24.436 "listen_address": { 00:18:24.436 "trtype": "TCP", 00:18:24.436 "adrfam": "IPv4", 00:18:24.436 "traddr": "10.0.0.2", 00:18:24.436 "trsvcid": "4420" 00:18:24.436 }, 00:18:24.436 "peer_address": { 00:18:24.436 "trtype": "TCP", 00:18:24.436 "adrfam": "IPv4", 00:18:24.436 "traddr": "10.0.0.1", 00:18:24.436 "trsvcid": "38376" 00:18:24.436 }, 00:18:24.436 "auth": { 00:18:24.436 "state": "completed", 00:18:24.436 "digest": "sha384", 00:18:24.436 "dhgroup": "null" 00:18:24.436 } 00:18:24.436 } 00:18:24.436 ]' 00:18:24.436 16:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:24.436 16:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:18:24.436 16:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:24.436 16:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:24.436 16:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:24.436 16:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.436 16:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.436 16:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.696 16:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZjZkMWQ3NGI2NTkxYzQ5ZTNhOTJiN2NlM2EwNDNkMzU2NzRkOWVjY2IwOTdiOTRjz3qt5A==: --dhchap-ctrl-secret DHHC-1:01:MzA0Nzc2YTI0NTg0ZjcyZGFkYTZlY2NkMmE5MTdiOTi8reIF: 00:18:25.266 16:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.266 16:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:25.266 16:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.266 16:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.266 16:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.266 16:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:25.266 16:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:25.266 16:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:25.526 16:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:18:25.526 16:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:25.526 16:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:25.526 16:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:25.526 16:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:25.526 16:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.526 16:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:25.526 16:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.526 16:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.526 16:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.526 16:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:25.526 16:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:25.787 00:18:25.787 16:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:25.787 16:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:25.787 16:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.787 16:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.787 16:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.787 16:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.787 16:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.046 16:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.046 16:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:26.046 { 00:18:26.046 "cntlid": 55, 00:18:26.046 "qid": 0, 00:18:26.046 "state": "enabled", 00:18:26.046 "thread": "nvmf_tgt_poll_group_000", 00:18:26.046 "listen_address": { 00:18:26.046 "trtype": "TCP", 00:18:26.046 "adrfam": "IPv4", 00:18:26.046 "traddr": "10.0.0.2", 00:18:26.046 "trsvcid": "4420" 00:18:26.046 }, 00:18:26.046 "peer_address": { 00:18:26.046 "trtype": "TCP", 00:18:26.046 "adrfam": "IPv4", 00:18:26.046 "traddr": "10.0.0.1", 00:18:26.046 "trsvcid": "38402" 00:18:26.046 }, 00:18:26.046 "auth": { 00:18:26.046 "state": "completed", 00:18:26.046 "digest": "sha384", 00:18:26.046 "dhgroup": "null" 00:18:26.046 } 00:18:26.046 } 00:18:26.046 ]' 00:18:26.046 16:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:26.046 16:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:26.046 16:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:26.046 16:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:26.046 16:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:26.046 16:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.046 16:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.046 16:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.306 16:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NTQ1Y2Y0MDc5Mzc3MTgzNWYyZTZmOTQ3ZmZjMDBlZWZiNTllMGMyY2U1YWYzZjVhYmQ0YjRlMDUxM2I4NzNkMSKmw9Y=: 00:18:26.875 16:10:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.875 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.875 16:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:26.875 16:10:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.875 16:10:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.875 16:10:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.875 16:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:26.875 16:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:26.875 16:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:26.875 16:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:27.136 16:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:18:27.136 16:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:27.136 16:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:27.136 16:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:27.136 16:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:27.136 16:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.136 16:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.136 16:10:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.136 16:10:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.136 16:10:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.136 16:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.136 16:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.397 00:18:27.397 16:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:27.397 16:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.397 16:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:27.397 16:10:03 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.397 16:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.397 16:10:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.397 16:10:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.397 16:10:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.397 16:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:27.397 { 00:18:27.397 "cntlid": 57, 00:18:27.397 "qid": 0, 00:18:27.397 "state": "enabled", 00:18:27.397 "thread": "nvmf_tgt_poll_group_000", 00:18:27.397 "listen_address": { 00:18:27.397 "trtype": "TCP", 00:18:27.397 "adrfam": "IPv4", 00:18:27.397 "traddr": "10.0.0.2", 00:18:27.397 "trsvcid": "4420" 00:18:27.397 }, 00:18:27.397 "peer_address": { 00:18:27.397 "trtype": "TCP", 00:18:27.397 "adrfam": "IPv4", 00:18:27.397 "traddr": "10.0.0.1", 00:18:27.397 "trsvcid": "38436" 00:18:27.397 }, 00:18:27.397 "auth": { 00:18:27.397 "state": "completed", 00:18:27.397 "digest": "sha384", 00:18:27.397 "dhgroup": "ffdhe2048" 00:18:27.397 } 00:18:27.397 } 00:18:27.397 ]' 00:18:27.397 16:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:27.656 16:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:27.656 16:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:27.656 16:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:27.656 16:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:27.656 16:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.656 16:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.656 16:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.915 16:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ODkxNGYzYjRlYWVkYzI1YTIyMDkzNjNjNmE4MTFkMzZiZjc2YWRkZDUzNWU5NWJlAOylMg==: --dhchap-ctrl-secret DHHC-1:03:OThjM2QwODYwZTU2ZWI1MTgxNjM2MjdhZTFiOWM4NjFkNzAwNzg5MjEyMWMwNTQ0NWY4MjFhMjE4OGQyYjMwOPhTa7Y=: 00:18:28.486 16:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.486 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.486 16:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:28.486 16:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.486 16:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.486 16:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.486 16:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:28.486 16:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:28.486 16:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:28.747 16:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:18:28.747 16:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:28.747 16:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:28.747 16:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:28.747 16:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:28.747 16:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.747 16:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.747 16:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.747 16:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.747 16:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.747 16:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.747 16:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.007 00:18:29.007 16:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:29.007 16:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.007 16:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:29.007 16:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.007 16:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.007 16:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.007 16:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.007 16:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.007 16:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:29.007 { 00:18:29.007 "cntlid": 59, 00:18:29.007 "qid": 0, 00:18:29.007 "state": "enabled", 00:18:29.007 "thread": "nvmf_tgt_poll_group_000", 00:18:29.007 "listen_address": { 00:18:29.007 "trtype": "TCP", 00:18:29.007 "adrfam": "IPv4", 00:18:29.007 "traddr": "10.0.0.2", 00:18:29.007 "trsvcid": "4420" 00:18:29.007 }, 00:18:29.007 "peer_address": { 00:18:29.007 "trtype": "TCP", 00:18:29.007 "adrfam": "IPv4", 00:18:29.007 
"traddr": "10.0.0.1", 00:18:29.007 "trsvcid": "38472" 00:18:29.007 }, 00:18:29.007 "auth": { 00:18:29.007 "state": "completed", 00:18:29.007 "digest": "sha384", 00:18:29.007 "dhgroup": "ffdhe2048" 00:18:29.007 } 00:18:29.007 } 00:18:29.007 ]' 00:18:29.268 16:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:29.268 16:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:29.268 16:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:29.268 16:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:29.268 16:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:29.268 16:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.268 16:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.268 16:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.528 16:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:Nzk3MGM0ZGFjYWRlNTIyMTFmZWMxMDhkNzkwZWIwYWOuEthg: --dhchap-ctrl-secret DHHC-1:02:MzI1ZjJmNWJiZTUwMDQ1MGU2NDk5NmJiZjViZGZiNmZmMjVlMmI3ZjkyOGEzZmM3xvx5zA==: 00:18:30.098 16:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.098 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.098 16:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:30.098 16:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.098 16:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.098 16:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.098 16:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:30.098 16:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:30.098 16:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:30.358 16:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:18:30.358 16:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:30.358 16:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:30.358 16:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:30.358 16:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:30.358 16:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.358 16:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:30.358 16:10:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.358 16:10:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.358 16:10:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.358 16:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:30.358 16:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:30.618 00:18:30.618 16:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:30.618 16:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.618 16:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:30.618 16:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.618 16:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.618 16:10:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.618 16:10:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.618 16:10:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.618 16:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:30.618 { 00:18:30.618 "cntlid": 61, 00:18:30.618 "qid": 0, 00:18:30.618 "state": "enabled", 00:18:30.618 "thread": "nvmf_tgt_poll_group_000", 00:18:30.618 "listen_address": { 00:18:30.618 "trtype": "TCP", 00:18:30.618 "adrfam": "IPv4", 00:18:30.619 "traddr": "10.0.0.2", 00:18:30.619 "trsvcid": "4420" 00:18:30.619 }, 00:18:30.619 "peer_address": { 00:18:30.619 "trtype": "TCP", 00:18:30.619 "adrfam": "IPv4", 00:18:30.619 "traddr": "10.0.0.1", 00:18:30.619 "trsvcid": "38496" 00:18:30.619 }, 00:18:30.619 "auth": { 00:18:30.619 "state": "completed", 00:18:30.619 "digest": "sha384", 00:18:30.619 "dhgroup": "ffdhe2048" 00:18:30.619 } 00:18:30.619 } 00:18:30.619 ]' 00:18:30.619 16:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:30.878 16:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:30.878 16:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:30.878 16:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:30.878 16:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:30.878 16:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.878 16:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.879 16:10:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.879 16:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZjZkMWQ3NGI2NTkxYzQ5ZTNhOTJiN2NlM2EwNDNkMzU2NzRkOWVjY2IwOTdiOTRjz3qt5A==: --dhchap-ctrl-secret DHHC-1:01:MzA0Nzc2YTI0NTg0ZjcyZGFkYTZlY2NkMmE5MTdiOTi8reIF: 00:18:31.818 16:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.818 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.818 16:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:31.818 16:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.818 16:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.818 16:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.818 16:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:31.818 16:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:31.818 16:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:31.818 16:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:18:31.818 16:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:31.818 16:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:31.818 16:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:31.818 16:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:31.818 16:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.818 16:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:31.818 16:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.818 16:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.818 16:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.818 16:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:31.818 16:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:32.078 00:18:32.078 16:10:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:32.078 16:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:32.078 16:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.338 16:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.339 16:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.339 16:10:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.339 16:10:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.339 16:10:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.339 16:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:32.339 { 00:18:32.339 "cntlid": 63, 00:18:32.339 "qid": 0, 00:18:32.339 "state": "enabled", 00:18:32.339 "thread": "nvmf_tgt_poll_group_000", 00:18:32.339 "listen_address": { 00:18:32.339 "trtype": "TCP", 00:18:32.339 "adrfam": "IPv4", 00:18:32.339 "traddr": "10.0.0.2", 00:18:32.339 "trsvcid": "4420" 00:18:32.339 }, 00:18:32.339 "peer_address": { 00:18:32.339 "trtype": "TCP", 00:18:32.339 "adrfam": "IPv4", 00:18:32.339 "traddr": "10.0.0.1", 00:18:32.339 "trsvcid": "38128" 00:18:32.339 }, 00:18:32.339 "auth": { 00:18:32.339 "state": "completed", 00:18:32.339 "digest": "sha384", 00:18:32.339 "dhgroup": "ffdhe2048" 00:18:32.339 } 00:18:32.339 } 00:18:32.339 ]' 00:18:32.339 16:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:32.339 16:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:32.339 16:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:32.339 16:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:32.339 16:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:32.339 16:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.339 16:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.339 16:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.599 16:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NTQ1Y2Y0MDc5Mzc3MTgzNWYyZTZmOTQ3ZmZjMDBlZWZiNTllMGMyY2U1YWYzZjVhYmQ0YjRlMDUxM2I4NzNkMSKmw9Y=: 00:18:33.577 16:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.577 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.577 16:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:33.577 16:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.577 16:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
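[editor's note] For readability, here is a minimal sketch of the cycle the trace above keeps repeating for each digest/dhgroup/key combination: restrict the host's DH-HMAC-CHAP options, allow the host on the subsystem with a key pair, attach a controller through the host RPC (which performs the in-band authentication), verify the negotiated auth parameters on the qpair, then tear down and re-check from the kernel initiator. It only restates commands already visible in this log; the SPDK paths, addresses, NQNs and key names (key0/ckey0) are the ones used in this run, the key registration itself happens earlier in target/auth.sh and is not shown here, and $key0_secret/$ckey0_secret stand in for the DHHC-1:... strings from the script's keys/ckeys arrays.

#!/usr/bin/env bash
# Sketch of one connect_authenticate iteration, reconstructed from the trace above.
# Assumes the SPDK target (default RPC socket) and the host RPC server on
# /var/tmp/host.sock are already running, and that key0/ckey0 were registered
# earlier in target/auth.sh (not reproduced here).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be
hostnqn=nqn.2014-08.org.nvmexpress:uuid:$hostid
digest=sha384
dhgroup=ffdhe3072

# Limit the host-side bdev_nvme module to one digest/dhgroup combination.
$rpc -s $hostsock bdev_nvme_set_options --dhchap-digests $digest --dhchap-dhgroups $dhgroup

# Allow the host on the subsystem with a key pair (target-side RPC, default socket,
# as rpc_cmd does above), then attach a controller via the host RPC so the attach
# itself performs DH-HMAC-CHAP authentication.
$rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key0 --dhchap-ctrlr-key ckey0
$rpc -s $hostsock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q $hostnqn -n $subnqn --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Verify the qpair negotiated the expected parameters, mirroring the [[ ... ]] checks above.
qpairs=$($rpc nvmf_subsystem_get_qpairs $subnqn)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]

# Tear down, then repeat the check from the kernel initiator with the plaintext secrets.
$rpc -s $hostsock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n $subnqn -i 1 -q $hostnqn --hostid $hostid \
    --dhchap-secret "$key0_secret" --dhchap-ctrl-secret "$ckey0_secret"
nvme disconnect -n $subnqn
$rpc nvmf_subsystem_remove_host $subnqn $hostnqn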
00:18:33.577 16:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.577 16:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:33.577 16:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:33.577 16:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:33.577 16:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:33.577 16:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:18:33.577 16:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:33.577 16:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:33.577 16:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:33.577 16:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:33.577 16:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.577 16:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.577 16:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.577 16:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.577 16:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.577 16:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.577 16:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.860 00:18:33.860 16:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:33.860 16:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:33.860 16:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.860 16:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.860 16:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.860 16:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.860 16:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.860 16:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.860 16:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:33.860 { 
00:18:33.860 "cntlid": 65, 00:18:33.860 "qid": 0, 00:18:33.860 "state": "enabled", 00:18:33.860 "thread": "nvmf_tgt_poll_group_000", 00:18:33.860 "listen_address": { 00:18:33.860 "trtype": "TCP", 00:18:33.860 "adrfam": "IPv4", 00:18:33.860 "traddr": "10.0.0.2", 00:18:33.860 "trsvcid": "4420" 00:18:33.860 }, 00:18:33.860 "peer_address": { 00:18:33.860 "trtype": "TCP", 00:18:33.860 "adrfam": "IPv4", 00:18:33.860 "traddr": "10.0.0.1", 00:18:33.860 "trsvcid": "38170" 00:18:33.860 }, 00:18:33.860 "auth": { 00:18:33.860 "state": "completed", 00:18:33.860 "digest": "sha384", 00:18:33.860 "dhgroup": "ffdhe3072" 00:18:33.860 } 00:18:33.860 } 00:18:33.860 ]' 00:18:33.860 16:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:33.860 16:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:34.121 16:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:34.121 16:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:34.121 16:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:34.121 16:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.121 16:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.121 16:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.121 16:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ODkxNGYzYjRlYWVkYzI1YTIyMDkzNjNjNmE4MTFkMzZiZjc2YWRkZDUzNWU5NWJlAOylMg==: --dhchap-ctrl-secret DHHC-1:03:OThjM2QwODYwZTU2ZWI1MTgxNjM2MjdhZTFiOWM4NjFkNzAwNzg5MjEyMWMwNTQ0NWY4MjFhMjE4OGQyYjMwOPhTa7Y=: 00:18:35.064 16:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.064 16:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:35.064 16:10:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.064 16:10:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.064 16:10:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.064 16:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:35.064 16:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:35.064 16:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:35.064 16:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:18:35.064 16:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:35.064 16:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:18:35.064 16:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:35.064 16:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:35.064 16:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.064 16:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.064 16:10:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.064 16:10:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.064 16:10:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.064 16:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.064 16:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.325 00:18:35.325 16:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:35.325 16:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:35.325 16:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.586 16:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.586 16:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.586 16:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.586 16:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.586 16:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.586 16:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:35.586 { 00:18:35.586 "cntlid": 67, 00:18:35.586 "qid": 0, 00:18:35.586 "state": "enabled", 00:18:35.586 "thread": "nvmf_tgt_poll_group_000", 00:18:35.586 "listen_address": { 00:18:35.586 "trtype": "TCP", 00:18:35.586 "adrfam": "IPv4", 00:18:35.586 "traddr": "10.0.0.2", 00:18:35.586 "trsvcid": "4420" 00:18:35.586 }, 00:18:35.586 "peer_address": { 00:18:35.586 "trtype": "TCP", 00:18:35.586 "adrfam": "IPv4", 00:18:35.586 "traddr": "10.0.0.1", 00:18:35.586 "trsvcid": "38212" 00:18:35.586 }, 00:18:35.586 "auth": { 00:18:35.586 "state": "completed", 00:18:35.586 "digest": "sha384", 00:18:35.586 "dhgroup": "ffdhe3072" 00:18:35.586 } 00:18:35.586 } 00:18:35.586 ]' 00:18:35.586 16:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:35.586 16:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:35.586 16:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:35.586 16:10:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:35.586 16:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:35.586 16:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.586 16:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.586 16:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.846 16:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:Nzk3MGM0ZGFjYWRlNTIyMTFmZWMxMDhkNzkwZWIwYWOuEthg: --dhchap-ctrl-secret DHHC-1:02:MzI1ZjJmNWJiZTUwMDQ1MGU2NDk5NmJiZjViZGZiNmZmMjVlMmI3ZjkyOGEzZmM3xvx5zA==: 00:18:36.789 16:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.789 16:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:36.789 16:10:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.789 16:10:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.789 16:10:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.789 16:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:36.789 16:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:36.789 16:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:36.789 16:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:18:36.789 16:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:36.789 16:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:36.789 16:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:36.789 16:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:36.789 16:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.789 16:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:36.789 16:10:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.789 16:10:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.789 16:10:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.789 16:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:36.789 16:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.051 00:18:37.051 16:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:37.051 16:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.051 16:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:37.330 16:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.330 16:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.330 16:10:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.330 16:10:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.330 16:10:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.330 16:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:37.330 { 00:18:37.330 "cntlid": 69, 00:18:37.330 "qid": 0, 00:18:37.330 "state": "enabled", 00:18:37.330 "thread": "nvmf_tgt_poll_group_000", 00:18:37.330 "listen_address": { 00:18:37.330 "trtype": "TCP", 00:18:37.330 "adrfam": "IPv4", 00:18:37.330 "traddr": "10.0.0.2", 00:18:37.330 "trsvcid": "4420" 00:18:37.330 }, 00:18:37.330 "peer_address": { 00:18:37.330 "trtype": "TCP", 00:18:37.330 "adrfam": "IPv4", 00:18:37.330 "traddr": "10.0.0.1", 00:18:37.330 "trsvcid": "38236" 00:18:37.330 }, 00:18:37.330 "auth": { 00:18:37.330 "state": "completed", 00:18:37.330 "digest": "sha384", 00:18:37.330 "dhgroup": "ffdhe3072" 00:18:37.330 } 00:18:37.330 } 00:18:37.330 ]' 00:18:37.330 16:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:37.330 16:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:37.331 16:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:37.331 16:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:37.331 16:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:37.331 16:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.331 16:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.331 16:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.590 16:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZjZkMWQ3NGI2NTkxYzQ5ZTNhOTJiN2NlM2EwNDNkMzU2NzRkOWVjY2IwOTdiOTRjz3qt5A==: --dhchap-ctrl-secret 
DHHC-1:01:MzA0Nzc2YTI0NTg0ZjcyZGFkYTZlY2NkMmE5MTdiOTi8reIF: 00:18:38.530 16:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.530 16:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:38.530 16:10:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.530 16:10:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.530 16:10:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.530 16:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:38.530 16:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:38.530 16:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:38.530 16:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:18:38.530 16:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:38.530 16:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:38.530 16:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:38.530 16:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:38.530 16:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.530 16:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:38.530 16:10:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.530 16:10:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.530 16:10:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.530 16:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:38.530 16:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:38.790 00:18:38.790 16:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:38.790 16:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:38.790 16:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.790 16:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.790 16:10:14 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.790 16:10:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.790 16:10:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.790 16:10:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.790 16:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:38.790 { 00:18:38.790 "cntlid": 71, 00:18:38.790 "qid": 0, 00:18:38.790 "state": "enabled", 00:18:38.790 "thread": "nvmf_tgt_poll_group_000", 00:18:38.790 "listen_address": { 00:18:38.790 "trtype": "TCP", 00:18:38.790 "adrfam": "IPv4", 00:18:38.790 "traddr": "10.0.0.2", 00:18:38.790 "trsvcid": "4420" 00:18:38.790 }, 00:18:38.790 "peer_address": { 00:18:38.790 "trtype": "TCP", 00:18:38.790 "adrfam": "IPv4", 00:18:38.790 "traddr": "10.0.0.1", 00:18:38.790 "trsvcid": "38268" 00:18:38.790 }, 00:18:38.790 "auth": { 00:18:38.790 "state": "completed", 00:18:38.790 "digest": "sha384", 00:18:38.790 "dhgroup": "ffdhe3072" 00:18:38.790 } 00:18:38.790 } 00:18:38.790 ]' 00:18:38.790 16:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:39.050 16:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:39.050 16:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:39.050 16:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:39.050 16:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:39.050 16:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.050 16:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.050 16:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.310 16:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NTQ1Y2Y0MDc5Mzc3MTgzNWYyZTZmOTQ3ZmZjMDBlZWZiNTllMGMyY2U1YWYzZjVhYmQ0YjRlMDUxM2I4NzNkMSKmw9Y=: 00:18:39.881 16:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.881 16:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:39.881 16:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.881 16:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.881 16:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.881 16:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:39.881 16:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:39.882 16:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:39.882 16:10:15 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:40.141 16:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:18:40.141 16:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:40.141 16:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:40.141 16:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:40.141 16:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:40.141 16:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.141 16:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.142 16:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.142 16:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.142 16:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.142 16:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.142 16:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.401 00:18:40.401 16:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:40.401 16:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.401 16:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:40.661 16:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.661 16:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.661 16:10:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.661 16:10:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.661 16:10:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.661 16:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:40.661 { 00:18:40.661 "cntlid": 73, 00:18:40.661 "qid": 0, 00:18:40.661 "state": "enabled", 00:18:40.661 "thread": "nvmf_tgt_poll_group_000", 00:18:40.661 "listen_address": { 00:18:40.661 "trtype": "TCP", 00:18:40.661 "adrfam": "IPv4", 00:18:40.661 "traddr": "10.0.0.2", 00:18:40.661 "trsvcid": "4420" 00:18:40.661 }, 00:18:40.661 "peer_address": { 00:18:40.661 "trtype": "TCP", 00:18:40.661 "adrfam": "IPv4", 00:18:40.661 "traddr": "10.0.0.1", 00:18:40.661 "trsvcid": "38302" 00:18:40.661 }, 00:18:40.661 "auth": { 00:18:40.661 
"state": "completed", 00:18:40.661 "digest": "sha384", 00:18:40.661 "dhgroup": "ffdhe4096" 00:18:40.661 } 00:18:40.661 } 00:18:40.661 ]' 00:18:40.661 16:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:40.661 16:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:40.661 16:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:40.661 16:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:40.661 16:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:40.661 16:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.661 16:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.661 16:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.921 16:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ODkxNGYzYjRlYWVkYzI1YTIyMDkzNjNjNmE4MTFkMzZiZjc2YWRkZDUzNWU5NWJlAOylMg==: --dhchap-ctrl-secret DHHC-1:03:OThjM2QwODYwZTU2ZWI1MTgxNjM2MjdhZTFiOWM4NjFkNzAwNzg5MjEyMWMwNTQ0NWY4MjFhMjE4OGQyYjMwOPhTa7Y=: 00:18:41.491 16:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.753 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.753 16:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:41.753 16:10:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.753 16:10:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.753 16:10:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.753 16:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:41.753 16:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:41.753 16:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:41.753 16:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:18:41.753 16:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:41.753 16:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:41.753 16:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:41.753 16:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:41.753 16:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.753 16:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.753 16:10:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.753 16:10:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.753 16:10:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.753 16:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.753 16:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.014 00:18:42.014 16:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:42.014 16:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:42.014 16:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.279 16:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.279 16:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.279 16:10:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.279 16:10:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.279 16:10:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.279 16:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:42.279 { 00:18:42.279 "cntlid": 75, 00:18:42.279 "qid": 0, 00:18:42.279 "state": "enabled", 00:18:42.279 "thread": "nvmf_tgt_poll_group_000", 00:18:42.279 "listen_address": { 00:18:42.279 "trtype": "TCP", 00:18:42.279 "adrfam": "IPv4", 00:18:42.279 "traddr": "10.0.0.2", 00:18:42.279 "trsvcid": "4420" 00:18:42.279 }, 00:18:42.279 "peer_address": { 00:18:42.279 "trtype": "TCP", 00:18:42.279 "adrfam": "IPv4", 00:18:42.279 "traddr": "10.0.0.1", 00:18:42.279 "trsvcid": "45770" 00:18:42.279 }, 00:18:42.279 "auth": { 00:18:42.279 "state": "completed", 00:18:42.279 "digest": "sha384", 00:18:42.279 "dhgroup": "ffdhe4096" 00:18:42.279 } 00:18:42.279 } 00:18:42.279 ]' 00:18:42.279 16:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:42.279 16:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:42.279 16:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:42.279 16:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:42.279 16:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:42.279 16:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.279 16:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.279 16:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.540 16:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:Nzk3MGM0ZGFjYWRlNTIyMTFmZWMxMDhkNzkwZWIwYWOuEthg: --dhchap-ctrl-secret DHHC-1:02:MzI1ZjJmNWJiZTUwMDQ1MGU2NDk5NmJiZjViZGZiNmZmMjVlMmI3ZjkyOGEzZmM3xvx5zA==: 00:18:43.482 16:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.482 16:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:43.482 16:10:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.482 16:10:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.482 16:10:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.482 16:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:43.482 16:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:43.482 16:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:43.482 16:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:18:43.482 16:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:43.482 16:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:43.482 16:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:43.482 16:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:43.482 16:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.482 16:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.482 16:10:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.482 16:10:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.482 16:10:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.482 16:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.482 16:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:18:43.743 00:18:43.743 16:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:43.743 16:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:43.743 16:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.743 16:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.004 16:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.004 16:10:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.005 16:10:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.005 16:10:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.005 16:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:44.005 { 00:18:44.005 "cntlid": 77, 00:18:44.005 "qid": 0, 00:18:44.005 "state": "enabled", 00:18:44.005 "thread": "nvmf_tgt_poll_group_000", 00:18:44.005 "listen_address": { 00:18:44.005 "trtype": "TCP", 00:18:44.005 "adrfam": "IPv4", 00:18:44.005 "traddr": "10.0.0.2", 00:18:44.005 "trsvcid": "4420" 00:18:44.005 }, 00:18:44.005 "peer_address": { 00:18:44.005 "trtype": "TCP", 00:18:44.005 "adrfam": "IPv4", 00:18:44.005 "traddr": "10.0.0.1", 00:18:44.005 "trsvcid": "45804" 00:18:44.005 }, 00:18:44.005 "auth": { 00:18:44.005 "state": "completed", 00:18:44.005 "digest": "sha384", 00:18:44.005 "dhgroup": "ffdhe4096" 00:18:44.005 } 00:18:44.005 } 00:18:44.005 ]' 00:18:44.005 16:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:44.005 16:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:44.005 16:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:44.005 16:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:44.005 16:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:44.005 16:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.005 16:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.005 16:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.266 16:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZjZkMWQ3NGI2NTkxYzQ5ZTNhOTJiN2NlM2EwNDNkMzU2NzRkOWVjY2IwOTdiOTRjz3qt5A==: --dhchap-ctrl-secret DHHC-1:01:MzA0Nzc2YTI0NTg0ZjcyZGFkYTZlY2NkMmE5MTdiOTi8reIF: 00:18:44.837 16:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.097 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.098 16:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:45.098 16:10:20 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.098 16:10:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.098 16:10:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.098 16:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:45.098 16:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:45.098 16:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:45.098 16:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:18:45.098 16:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:45.098 16:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:45.098 16:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:45.098 16:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:45.098 16:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.098 16:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:45.098 16:10:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.098 16:10:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.098 16:10:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.098 16:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:45.098 16:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:45.357 00:18:45.357 16:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:45.357 16:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:45.357 16:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.617 16:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.617 16:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.617 16:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.617 16:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.617 16:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.617 16:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:45.617 { 00:18:45.617 "cntlid": 79, 00:18:45.617 "qid": 
0, 00:18:45.617 "state": "enabled", 00:18:45.617 "thread": "nvmf_tgt_poll_group_000", 00:18:45.617 "listen_address": { 00:18:45.617 "trtype": "TCP", 00:18:45.617 "adrfam": "IPv4", 00:18:45.617 "traddr": "10.0.0.2", 00:18:45.617 "trsvcid": "4420" 00:18:45.617 }, 00:18:45.617 "peer_address": { 00:18:45.617 "trtype": "TCP", 00:18:45.617 "adrfam": "IPv4", 00:18:45.617 "traddr": "10.0.0.1", 00:18:45.617 "trsvcid": "45822" 00:18:45.617 }, 00:18:45.617 "auth": { 00:18:45.617 "state": "completed", 00:18:45.617 "digest": "sha384", 00:18:45.617 "dhgroup": "ffdhe4096" 00:18:45.617 } 00:18:45.617 } 00:18:45.617 ]' 00:18:45.617 16:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:45.617 16:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:45.617 16:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:45.617 16:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:45.617 16:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:45.617 16:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.617 16:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.617 16:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.876 16:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NTQ1Y2Y0MDc5Mzc3MTgzNWYyZTZmOTQ3ZmZjMDBlZWZiNTllMGMyY2U1YWYzZjVhYmQ0YjRlMDUxM2I4NzNkMSKmw9Y=: 00:18:46.446 16:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.707 16:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:46.707 16:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.707 16:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.707 16:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.707 16:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:46.707 16:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:46.707 16:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:46.707 16:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:46.707 16:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:18:46.707 16:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:46.707 16:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:46.707 16:10:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:46.707 16:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:46.707 16:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.707 16:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.707 16:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.707 16:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.707 16:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.707 16:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.707 16:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.278 00:18:47.278 16:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:47.278 16:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.278 16:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:47.278 16:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.278 16:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.278 16:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.278 16:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.278 16:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.278 16:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:47.278 { 00:18:47.278 "cntlid": 81, 00:18:47.278 "qid": 0, 00:18:47.278 "state": "enabled", 00:18:47.278 "thread": "nvmf_tgt_poll_group_000", 00:18:47.278 "listen_address": { 00:18:47.278 "trtype": "TCP", 00:18:47.278 "adrfam": "IPv4", 00:18:47.278 "traddr": "10.0.0.2", 00:18:47.278 "trsvcid": "4420" 00:18:47.278 }, 00:18:47.278 "peer_address": { 00:18:47.278 "trtype": "TCP", 00:18:47.278 "adrfam": "IPv4", 00:18:47.278 "traddr": "10.0.0.1", 00:18:47.278 "trsvcid": "45846" 00:18:47.278 }, 00:18:47.278 "auth": { 00:18:47.278 "state": "completed", 00:18:47.278 "digest": "sha384", 00:18:47.278 "dhgroup": "ffdhe6144" 00:18:47.278 } 00:18:47.278 } 00:18:47.278 ]' 00:18:47.278 16:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:47.278 16:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:47.278 16:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:47.278 16:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:47.278 16:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:47.548 16:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.549 16:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.549 16:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.549 16:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ODkxNGYzYjRlYWVkYzI1YTIyMDkzNjNjNmE4MTFkMzZiZjc2YWRkZDUzNWU5NWJlAOylMg==: --dhchap-ctrl-secret DHHC-1:03:OThjM2QwODYwZTU2ZWI1MTgxNjM2MjdhZTFiOWM4NjFkNzAwNzg5MjEyMWMwNTQ0NWY4MjFhMjE4OGQyYjMwOPhTa7Y=: 00:18:48.538 16:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.538 16:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:48.538 16:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.538 16:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.538 16:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.538 16:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:48.538 16:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:48.538 16:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:48.538 16:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:18:48.538 16:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:48.538 16:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:48.538 16:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:48.538 16:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:48.538 16:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.538 16:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.538 16:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.538 16:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.538 16:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.538 16:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.538 16:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.798 00:18:48.798 16:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:48.798 16:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:48.798 16:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.058 16:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.058 16:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.058 16:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.058 16:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.058 16:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.058 16:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:49.058 { 00:18:49.058 "cntlid": 83, 00:18:49.058 "qid": 0, 00:18:49.058 "state": "enabled", 00:18:49.058 "thread": "nvmf_tgt_poll_group_000", 00:18:49.058 "listen_address": { 00:18:49.058 "trtype": "TCP", 00:18:49.058 "adrfam": "IPv4", 00:18:49.058 "traddr": "10.0.0.2", 00:18:49.058 "trsvcid": "4420" 00:18:49.058 }, 00:18:49.058 "peer_address": { 00:18:49.058 "trtype": "TCP", 00:18:49.058 "adrfam": "IPv4", 00:18:49.058 "traddr": "10.0.0.1", 00:18:49.058 "trsvcid": "45868" 00:18:49.058 }, 00:18:49.058 "auth": { 00:18:49.058 "state": "completed", 00:18:49.058 "digest": "sha384", 00:18:49.058 "dhgroup": "ffdhe6144" 00:18:49.059 } 00:18:49.059 } 00:18:49.059 ]' 00:18:49.059 16:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:49.059 16:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:49.059 16:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:49.059 16:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:49.059 16:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:49.318 16:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.318 16:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.318 16:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.318 16:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:Nzk3MGM0ZGFjYWRlNTIyMTFmZWMxMDhkNzkwZWIwYWOuEthg: --dhchap-ctrl-secret 
DHHC-1:02:MzI1ZjJmNWJiZTUwMDQ1MGU2NDk5NmJiZjViZGZiNmZmMjVlMmI3ZjkyOGEzZmM3xvx5zA==: 00:18:50.259 16:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.259 16:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:50.259 16:10:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.259 16:10:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.259 16:10:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.259 16:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:50.259 16:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:50.259 16:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:50.259 16:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:18:50.260 16:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:50.260 16:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:50.260 16:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:50.260 16:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:50.260 16:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.260 16:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.260 16:10:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.260 16:10:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.260 16:10:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.260 16:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.260 16:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.520 00:18:50.780 16:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:50.780 16:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:50.780 16:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.780 16:10:26 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.780 16:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.780 16:10:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.780 16:10:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.780 16:10:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.780 16:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:50.780 { 00:18:50.780 "cntlid": 85, 00:18:50.780 "qid": 0, 00:18:50.780 "state": "enabled", 00:18:50.780 "thread": "nvmf_tgt_poll_group_000", 00:18:50.780 "listen_address": { 00:18:50.780 "trtype": "TCP", 00:18:50.780 "adrfam": "IPv4", 00:18:50.780 "traddr": "10.0.0.2", 00:18:50.780 "trsvcid": "4420" 00:18:50.780 }, 00:18:50.780 "peer_address": { 00:18:50.780 "trtype": "TCP", 00:18:50.780 "adrfam": "IPv4", 00:18:50.780 "traddr": "10.0.0.1", 00:18:50.780 "trsvcid": "45890" 00:18:50.780 }, 00:18:50.780 "auth": { 00:18:50.780 "state": "completed", 00:18:50.780 "digest": "sha384", 00:18:50.780 "dhgroup": "ffdhe6144" 00:18:50.780 } 00:18:50.780 } 00:18:50.780 ]' 00:18:50.780 16:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:50.780 16:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:50.780 16:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:51.040 16:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:51.040 16:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:51.040 16:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.040 16:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.040 16:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.040 16:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZjZkMWQ3NGI2NTkxYzQ5ZTNhOTJiN2NlM2EwNDNkMzU2NzRkOWVjY2IwOTdiOTRjz3qt5A==: --dhchap-ctrl-secret DHHC-1:01:MzA0Nzc2YTI0NTg0ZjcyZGFkYTZlY2NkMmE5MTdiOTi8reIF: 00:18:51.981 16:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.981 16:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:51.981 16:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.981 16:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.981 16:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.981 16:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:51.981 16:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
00:18:51.981 16:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:51.981 16:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:18:51.981 16:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:51.981 16:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:51.981 16:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:51.981 16:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:51.981 16:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.981 16:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:51.981 16:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.981 16:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.981 16:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.981 16:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:51.981 16:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:52.552 00:18:52.552 16:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:52.552 16:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.552 16:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:52.552 16:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.552 16:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.552 16:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.552 16:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.552 16:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.552 16:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:52.552 { 00:18:52.552 "cntlid": 87, 00:18:52.552 "qid": 0, 00:18:52.552 "state": "enabled", 00:18:52.552 "thread": "nvmf_tgt_poll_group_000", 00:18:52.552 "listen_address": { 00:18:52.552 "trtype": "TCP", 00:18:52.552 "adrfam": "IPv4", 00:18:52.552 "traddr": "10.0.0.2", 00:18:52.552 "trsvcid": "4420" 00:18:52.552 }, 00:18:52.552 "peer_address": { 00:18:52.552 "trtype": "TCP", 00:18:52.552 "adrfam": "IPv4", 00:18:52.552 "traddr": "10.0.0.1", 00:18:52.552 "trsvcid": "49022" 00:18:52.552 }, 00:18:52.552 "auth": { 00:18:52.552 "state": "completed", 
00:18:52.552 "digest": "sha384", 00:18:52.552 "dhgroup": "ffdhe6144" 00:18:52.552 } 00:18:52.552 } 00:18:52.552 ]' 00:18:52.552 16:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:52.552 16:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:52.552 16:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:52.552 16:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:52.552 16:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:52.812 16:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.812 16:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.812 16:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.812 16:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NTQ1Y2Y0MDc5Mzc3MTgzNWYyZTZmOTQ3ZmZjMDBlZWZiNTllMGMyY2U1YWYzZjVhYmQ0YjRlMDUxM2I4NzNkMSKmw9Y=: 00:18:53.754 16:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.754 16:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:53.754 16:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.754 16:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.754 16:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.754 16:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:53.754 16:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:53.754 16:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:53.754 16:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:53.754 16:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:18:53.754 16:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:53.754 16:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:53.754 16:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:53.754 16:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:53.754 16:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.754 16:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:18:53.754 16:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.754 16:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.754 16:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.754 16:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.754 16:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:54.324 00:18:54.324 16:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:54.324 16:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:54.324 16:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.583 16:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.583 16:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.583 16:10:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.583 16:10:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.583 16:10:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.583 16:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:54.583 { 00:18:54.583 "cntlid": 89, 00:18:54.583 "qid": 0, 00:18:54.583 "state": "enabled", 00:18:54.583 "thread": "nvmf_tgt_poll_group_000", 00:18:54.583 "listen_address": { 00:18:54.583 "trtype": "TCP", 00:18:54.583 "adrfam": "IPv4", 00:18:54.583 "traddr": "10.0.0.2", 00:18:54.583 "trsvcid": "4420" 00:18:54.583 }, 00:18:54.583 "peer_address": { 00:18:54.583 "trtype": "TCP", 00:18:54.583 "adrfam": "IPv4", 00:18:54.583 "traddr": "10.0.0.1", 00:18:54.583 "trsvcid": "49048" 00:18:54.583 }, 00:18:54.583 "auth": { 00:18:54.583 "state": "completed", 00:18:54.583 "digest": "sha384", 00:18:54.583 "dhgroup": "ffdhe8192" 00:18:54.583 } 00:18:54.583 } 00:18:54.583 ]' 00:18:54.583 16:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:54.583 16:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:54.583 16:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:54.583 16:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:54.583 16:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:54.583 16:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.583 16:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.583 16:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.842 16:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ODkxNGYzYjRlYWVkYzI1YTIyMDkzNjNjNmE4MTFkMzZiZjc2YWRkZDUzNWU5NWJlAOylMg==: --dhchap-ctrl-secret DHHC-1:03:OThjM2QwODYwZTU2ZWI1MTgxNjM2MjdhZTFiOWM4NjFkNzAwNzg5MjEyMWMwNTQ0NWY4MjFhMjE4OGQyYjMwOPhTa7Y=: 00:18:55.412 16:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.412 16:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:55.412 16:10:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.412 16:10:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.673 16:10:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.673 16:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:55.673 16:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:55.673 16:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:55.673 16:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:18:55.673 16:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:55.673 16:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:55.673 16:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:55.673 16:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:55.673 16:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.673 16:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.673 16:10:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.673 16:10:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.673 16:10:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.673 16:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.673 16:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
00:18:56.242 00:18:56.242 16:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:56.242 16:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.242 16:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:56.502 16:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.502 16:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.502 16:10:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.502 16:10:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.502 16:10:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.502 16:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:56.502 { 00:18:56.502 "cntlid": 91, 00:18:56.502 "qid": 0, 00:18:56.502 "state": "enabled", 00:18:56.502 "thread": "nvmf_tgt_poll_group_000", 00:18:56.502 "listen_address": { 00:18:56.502 "trtype": "TCP", 00:18:56.502 "adrfam": "IPv4", 00:18:56.502 "traddr": "10.0.0.2", 00:18:56.502 "trsvcid": "4420" 00:18:56.502 }, 00:18:56.502 "peer_address": { 00:18:56.502 "trtype": "TCP", 00:18:56.502 "adrfam": "IPv4", 00:18:56.502 "traddr": "10.0.0.1", 00:18:56.502 "trsvcid": "49092" 00:18:56.502 }, 00:18:56.502 "auth": { 00:18:56.502 "state": "completed", 00:18:56.502 "digest": "sha384", 00:18:56.502 "dhgroup": "ffdhe8192" 00:18:56.502 } 00:18:56.502 } 00:18:56.502 ]' 00:18:56.502 16:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:56.502 16:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:56.502 16:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:56.502 16:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:56.502 16:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:56.502 16:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.502 16:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.502 16:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.762 16:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:Nzk3MGM0ZGFjYWRlNTIyMTFmZWMxMDhkNzkwZWIwYWOuEthg: --dhchap-ctrl-secret DHHC-1:02:MzI1ZjJmNWJiZTUwMDQ1MGU2NDk5NmJiZjViZGZiNmZmMjVlMmI3ZjkyOGEzZmM3xvx5zA==: 00:18:57.332 16:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.332 16:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:57.332 16:10:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:57.332 16:10:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.592 16:10:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.592 16:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:57.592 16:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:57.592 16:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:57.592 16:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:18:57.592 16:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:57.592 16:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:57.592 16:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:57.592 16:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:57.592 16:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.592 16:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.592 16:10:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.592 16:10:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.592 16:10:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.592 16:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.592 16:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.162 00:18:58.162 16:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:58.162 16:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:58.162 16:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.422 16:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.422 16:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.422 16:10:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.422 16:10:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.422 16:10:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.422 16:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:58.422 { 
00:18:58.422 "cntlid": 93, 00:18:58.422 "qid": 0, 00:18:58.422 "state": "enabled", 00:18:58.422 "thread": "nvmf_tgt_poll_group_000", 00:18:58.422 "listen_address": { 00:18:58.422 "trtype": "TCP", 00:18:58.422 "adrfam": "IPv4", 00:18:58.422 "traddr": "10.0.0.2", 00:18:58.422 "trsvcid": "4420" 00:18:58.422 }, 00:18:58.422 "peer_address": { 00:18:58.422 "trtype": "TCP", 00:18:58.422 "adrfam": "IPv4", 00:18:58.422 "traddr": "10.0.0.1", 00:18:58.422 "trsvcid": "49118" 00:18:58.422 }, 00:18:58.422 "auth": { 00:18:58.422 "state": "completed", 00:18:58.422 "digest": "sha384", 00:18:58.422 "dhgroup": "ffdhe8192" 00:18:58.422 } 00:18:58.422 } 00:18:58.422 ]' 00:18:58.422 16:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:58.422 16:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:58.422 16:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:58.422 16:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:58.422 16:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:58.422 16:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.422 16:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.422 16:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.682 16:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZjZkMWQ3NGI2NTkxYzQ5ZTNhOTJiN2NlM2EwNDNkMzU2NzRkOWVjY2IwOTdiOTRjz3qt5A==: --dhchap-ctrl-secret DHHC-1:01:MzA0Nzc2YTI0NTg0ZjcyZGFkYTZlY2NkMmE5MTdiOTi8reIF: 00:18:59.253 16:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.253 16:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:59.253 16:10:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.253 16:10:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.253 16:10:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.253 16:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:59.253 16:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:59.253 16:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:59.514 16:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:18:59.514 16:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:59.514 16:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:59.514 16:10:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:59.514 16:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:59.514 16:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.514 16:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:59.514 16:10:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.514 16:10:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.514 16:10:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.514 16:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:59.514 16:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:00.085 00:19:00.086 16:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:00.086 16:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:00.086 16:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.346 16:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.346 16:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.346 16:10:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.346 16:10:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.346 16:10:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.346 16:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:00.346 { 00:19:00.346 "cntlid": 95, 00:19:00.346 "qid": 0, 00:19:00.346 "state": "enabled", 00:19:00.346 "thread": "nvmf_tgt_poll_group_000", 00:19:00.346 "listen_address": { 00:19:00.346 "trtype": "TCP", 00:19:00.346 "adrfam": "IPv4", 00:19:00.346 "traddr": "10.0.0.2", 00:19:00.346 "trsvcid": "4420" 00:19:00.346 }, 00:19:00.346 "peer_address": { 00:19:00.346 "trtype": "TCP", 00:19:00.346 "adrfam": "IPv4", 00:19:00.346 "traddr": "10.0.0.1", 00:19:00.346 "trsvcid": "49152" 00:19:00.346 }, 00:19:00.346 "auth": { 00:19:00.346 "state": "completed", 00:19:00.346 "digest": "sha384", 00:19:00.346 "dhgroup": "ffdhe8192" 00:19:00.346 } 00:19:00.346 } 00:19:00.346 ]' 00:19:00.346 16:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:00.346 16:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:00.346 16:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:00.346 16:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:00.347 16:10:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:00.347 16:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.347 16:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.347 16:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.607 16:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NTQ1Y2Y0MDc5Mzc3MTgzNWYyZTZmOTQ3ZmZjMDBlZWZiNTllMGMyY2U1YWYzZjVhYmQ0YjRlMDUxM2I4NzNkMSKmw9Y=: 00:19:01.179 16:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.179 16:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:01.179 16:10:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.179 16:10:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.179 16:10:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.179 16:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:01.179 16:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:01.179 16:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:01.179 16:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:01.179 16:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:01.440 16:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:19:01.440 16:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:01.440 16:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:01.440 16:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:01.440 16:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:01.440 16:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.440 16:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.440 16:10:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.440 16:10:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.440 16:10:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.440 16:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.440 16:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.701 00:19:01.701 16:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:01.701 16:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:01.701 16:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.961 16:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.961 16:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.961 16:10:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.961 16:10:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.961 16:10:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.961 16:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:01.961 { 00:19:01.961 "cntlid": 97, 00:19:01.961 "qid": 0, 00:19:01.961 "state": "enabled", 00:19:01.961 "thread": "nvmf_tgt_poll_group_000", 00:19:01.961 "listen_address": { 00:19:01.961 "trtype": "TCP", 00:19:01.961 "adrfam": "IPv4", 00:19:01.961 "traddr": "10.0.0.2", 00:19:01.961 "trsvcid": "4420" 00:19:01.961 }, 00:19:01.961 "peer_address": { 00:19:01.961 "trtype": "TCP", 00:19:01.961 "adrfam": "IPv4", 00:19:01.961 "traddr": "10.0.0.1", 00:19:01.961 "trsvcid": "49172" 00:19:01.961 }, 00:19:01.961 "auth": { 00:19:01.961 "state": "completed", 00:19:01.961 "digest": "sha512", 00:19:01.961 "dhgroup": "null" 00:19:01.961 } 00:19:01.961 } 00:19:01.961 ]' 00:19:01.961 16:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:01.961 16:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:01.961 16:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:01.961 16:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:01.961 16:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:01.961 16:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.961 16:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.961 16:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.222 16:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ODkxNGYzYjRlYWVkYzI1YTIyMDkzNjNjNmE4MTFkMzZiZjc2YWRkZDUzNWU5NWJlAOylMg==: --dhchap-ctrl-secret 
DHHC-1:03:OThjM2QwODYwZTU2ZWI1MTgxNjM2MjdhZTFiOWM4NjFkNzAwNzg5MjEyMWMwNTQ0NWY4MjFhMjE4OGQyYjMwOPhTa7Y=: 00:19:02.794 16:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.794 16:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:02.794 16:10:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.794 16:10:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.794 16:10:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.794 16:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:02.794 16:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:02.794 16:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:03.055 16:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:19:03.055 16:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:03.055 16:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:03.055 16:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:03.055 16:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:03.055 16:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.055 16:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.055 16:10:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.055 16:10:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.055 16:10:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.055 16:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.055 16:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.347 00:19:03.347 16:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:03.347 16:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.347 16:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:03.347 16:10:39 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.347 16:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.347 16:10:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.347 16:10:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.347 16:10:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.347 16:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:03.347 { 00:19:03.347 "cntlid": 99, 00:19:03.347 "qid": 0, 00:19:03.347 "state": "enabled", 00:19:03.347 "thread": "nvmf_tgt_poll_group_000", 00:19:03.347 "listen_address": { 00:19:03.347 "trtype": "TCP", 00:19:03.347 "adrfam": "IPv4", 00:19:03.347 "traddr": "10.0.0.2", 00:19:03.347 "trsvcid": "4420" 00:19:03.347 }, 00:19:03.347 "peer_address": { 00:19:03.347 "trtype": "TCP", 00:19:03.347 "adrfam": "IPv4", 00:19:03.347 "traddr": "10.0.0.1", 00:19:03.347 "trsvcid": "49360" 00:19:03.347 }, 00:19:03.347 "auth": { 00:19:03.347 "state": "completed", 00:19:03.347 "digest": "sha512", 00:19:03.347 "dhgroup": "null" 00:19:03.347 } 00:19:03.347 } 00:19:03.347 ]' 00:19:03.347 16:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:03.608 16:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:03.608 16:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:03.608 16:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:03.608 16:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:03.608 16:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.608 16:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.608 16:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.868 16:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:Nzk3MGM0ZGFjYWRlNTIyMTFmZWMxMDhkNzkwZWIwYWOuEthg: --dhchap-ctrl-secret DHHC-1:02:MzI1ZjJmNWJiZTUwMDQ1MGU2NDk5NmJiZjViZGZiNmZmMjVlMmI3ZjkyOGEzZmM3xvx5zA==: 00:19:04.440 16:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.440 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.440 16:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:04.440 16:10:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.440 16:10:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.440 16:10:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.440 16:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:04.440 16:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:04.440 16:10:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:04.700 16:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:19:04.700 16:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:04.700 16:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:04.700 16:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:04.700 16:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:04.700 16:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.700 16:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.700 16:10:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.700 16:10:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.700 16:10:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.700 16:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.700 16:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.960 00:19:04.960 16:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:04.960 16:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:04.960 16:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.960 16:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.960 16:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.960 16:10:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.960 16:10:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.960 16:10:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.960 16:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:04.960 { 00:19:04.960 "cntlid": 101, 00:19:04.960 "qid": 0, 00:19:04.960 "state": "enabled", 00:19:04.960 "thread": "nvmf_tgt_poll_group_000", 00:19:04.960 "listen_address": { 00:19:04.960 "trtype": "TCP", 00:19:04.960 "adrfam": "IPv4", 00:19:04.960 "traddr": "10.0.0.2", 00:19:04.960 "trsvcid": "4420" 00:19:04.960 }, 00:19:04.960 "peer_address": { 00:19:04.960 "trtype": "TCP", 00:19:04.960 "adrfam": "IPv4", 00:19:04.960 "traddr": "10.0.0.1", 00:19:04.960 "trsvcid": "49390" 00:19:04.960 }, 00:19:04.960 "auth": 
{ 00:19:04.960 "state": "completed", 00:19:04.960 "digest": "sha512", 00:19:04.960 "dhgroup": "null" 00:19:04.960 } 00:19:04.960 } 00:19:04.960 ]' 00:19:04.960 16:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.220 16:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:05.220 16:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.220 16:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:05.220 16:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.220 16:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.220 16:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.220 16:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.480 16:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZjZkMWQ3NGI2NTkxYzQ5ZTNhOTJiN2NlM2EwNDNkMzU2NzRkOWVjY2IwOTdiOTRjz3qt5A==: --dhchap-ctrl-secret DHHC-1:01:MzA0Nzc2YTI0NTg0ZjcyZGFkYTZlY2NkMmE5MTdiOTi8reIF: 00:19:06.050 16:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.050 16:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:06.050 16:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.050 16:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.050 16:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.050 16:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:06.050 16:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:06.050 16:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:06.311 16:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:19:06.311 16:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:06.311 16:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:06.311 16:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:06.311 16:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:06.311 16:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.311 16:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:06.311 16:10:42 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.311 16:10:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.311 16:10:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.311 16:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:06.311 16:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:06.573 00:19:06.573 16:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:06.573 16:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:06.573 16:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.573 16:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.573 16:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.573 16:10:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.573 16:10:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.573 16:10:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.573 16:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:06.573 { 00:19:06.573 "cntlid": 103, 00:19:06.573 "qid": 0, 00:19:06.573 "state": "enabled", 00:19:06.573 "thread": "nvmf_tgt_poll_group_000", 00:19:06.573 "listen_address": { 00:19:06.573 "trtype": "TCP", 00:19:06.573 "adrfam": "IPv4", 00:19:06.573 "traddr": "10.0.0.2", 00:19:06.573 "trsvcid": "4420" 00:19:06.573 }, 00:19:06.573 "peer_address": { 00:19:06.573 "trtype": "TCP", 00:19:06.573 "adrfam": "IPv4", 00:19:06.573 "traddr": "10.0.0.1", 00:19:06.573 "trsvcid": "49398" 00:19:06.573 }, 00:19:06.573 "auth": { 00:19:06.573 "state": "completed", 00:19:06.573 "digest": "sha512", 00:19:06.573 "dhgroup": "null" 00:19:06.573 } 00:19:06.573 } 00:19:06.573 ]' 00:19:06.573 16:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:06.833 16:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:06.833 16:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:06.833 16:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:06.833 16:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:06.833 16:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.833 16:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.833 16:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.094 16:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NTQ1Y2Y0MDc5Mzc3MTgzNWYyZTZmOTQ3ZmZjMDBlZWZiNTllMGMyY2U1YWYzZjVhYmQ0YjRlMDUxM2I4NzNkMSKmw9Y=: 00:19:07.665 16:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.665 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.665 16:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:07.665 16:10:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.665 16:10:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.665 16:10:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.665 16:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:07.665 16:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:07.665 16:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:07.665 16:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:07.926 16:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:19:07.926 16:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:07.926 16:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:07.926 16:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:07.926 16:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:07.926 16:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.926 16:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.926 16:10:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.926 16:10:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.926 16:10:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.926 16:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.926 16:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.185 00:19:08.186 16:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:08.186 16:10:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:08.186 16:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.186 16:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.186 16:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.186 16:10:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.186 16:10:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.446 16:10:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.446 16:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:08.446 { 00:19:08.446 "cntlid": 105, 00:19:08.446 "qid": 0, 00:19:08.446 "state": "enabled", 00:19:08.446 "thread": "nvmf_tgt_poll_group_000", 00:19:08.446 "listen_address": { 00:19:08.446 "trtype": "TCP", 00:19:08.446 "adrfam": "IPv4", 00:19:08.446 "traddr": "10.0.0.2", 00:19:08.446 "trsvcid": "4420" 00:19:08.446 }, 00:19:08.446 "peer_address": { 00:19:08.446 "trtype": "TCP", 00:19:08.446 "adrfam": "IPv4", 00:19:08.446 "traddr": "10.0.0.1", 00:19:08.446 "trsvcid": "49418" 00:19:08.446 }, 00:19:08.446 "auth": { 00:19:08.446 "state": "completed", 00:19:08.446 "digest": "sha512", 00:19:08.446 "dhgroup": "ffdhe2048" 00:19:08.446 } 00:19:08.446 } 00:19:08.446 ]' 00:19:08.446 16:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:08.446 16:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:08.446 16:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:08.446 16:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:08.446 16:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:08.446 16:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.446 16:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.446 16:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.706 16:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ODkxNGYzYjRlYWVkYzI1YTIyMDkzNjNjNmE4MTFkMzZiZjc2YWRkZDUzNWU5NWJlAOylMg==: --dhchap-ctrl-secret DHHC-1:03:OThjM2QwODYwZTU2ZWI1MTgxNjM2MjdhZTFiOWM4NjFkNzAwNzg5MjEyMWMwNTQ0NWY4MjFhMjE4OGQyYjMwOPhTa7Y=: 00:19:09.276 16:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.276 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.276 16:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:09.276 16:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.276 16:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:19:09.276 16:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.276 16:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:09.276 16:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:09.276 16:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:09.537 16:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:19:09.537 16:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:09.537 16:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:09.537 16:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:09.537 16:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:09.537 16:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.537 16:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.537 16:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.537 16:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.537 16:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.537 16:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.537 16:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.796 00:19:09.796 16:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:09.796 16:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:09.796 16:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.056 16:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.056 16:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.056 16:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.056 16:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.056 16:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.056 16:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:10.056 { 00:19:10.056 "cntlid": 107, 00:19:10.056 "qid": 0, 00:19:10.056 "state": "enabled", 00:19:10.056 
"thread": "nvmf_tgt_poll_group_000", 00:19:10.056 "listen_address": { 00:19:10.056 "trtype": "TCP", 00:19:10.056 "adrfam": "IPv4", 00:19:10.056 "traddr": "10.0.0.2", 00:19:10.056 "trsvcid": "4420" 00:19:10.056 }, 00:19:10.056 "peer_address": { 00:19:10.056 "trtype": "TCP", 00:19:10.056 "adrfam": "IPv4", 00:19:10.056 "traddr": "10.0.0.1", 00:19:10.056 "trsvcid": "49454" 00:19:10.056 }, 00:19:10.056 "auth": { 00:19:10.056 "state": "completed", 00:19:10.056 "digest": "sha512", 00:19:10.056 "dhgroup": "ffdhe2048" 00:19:10.056 } 00:19:10.056 } 00:19:10.056 ]' 00:19:10.056 16:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:10.056 16:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:10.057 16:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:10.057 16:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:10.057 16:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:10.057 16:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.057 16:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.057 16:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.316 16:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:Nzk3MGM0ZGFjYWRlNTIyMTFmZWMxMDhkNzkwZWIwYWOuEthg: --dhchap-ctrl-secret DHHC-1:02:MzI1ZjJmNWJiZTUwMDQ1MGU2NDk5NmJiZjViZGZiNmZmMjVlMmI3ZjkyOGEzZmM3xvx5zA==: 00:19:10.886 16:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.886 16:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:10.886 16:10:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.886 16:10:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.886 16:10:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.886 16:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:10.886 16:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:10.887 16:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:11.148 16:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:19:11.148 16:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:11.148 16:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:11.148 16:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:11.148 16:10:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:11.148 16:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.148 16:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.148 16:10:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.148 16:10:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.148 16:10:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.148 16:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.148 16:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.409 00:19:11.409 16:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:11.409 16:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.409 16:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:11.409 16:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.409 16:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.409 16:10:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.409 16:10:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.409 16:10:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.409 16:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:11.409 { 00:19:11.409 "cntlid": 109, 00:19:11.409 "qid": 0, 00:19:11.409 "state": "enabled", 00:19:11.409 "thread": "nvmf_tgt_poll_group_000", 00:19:11.409 "listen_address": { 00:19:11.409 "trtype": "TCP", 00:19:11.409 "adrfam": "IPv4", 00:19:11.409 "traddr": "10.0.0.2", 00:19:11.409 "trsvcid": "4420" 00:19:11.409 }, 00:19:11.409 "peer_address": { 00:19:11.409 "trtype": "TCP", 00:19:11.409 "adrfam": "IPv4", 00:19:11.409 "traddr": "10.0.0.1", 00:19:11.409 "trsvcid": "49478" 00:19:11.409 }, 00:19:11.409 "auth": { 00:19:11.409 "state": "completed", 00:19:11.409 "digest": "sha512", 00:19:11.409 "dhgroup": "ffdhe2048" 00:19:11.409 } 00:19:11.409 } 00:19:11.409 ]' 00:19:11.409 16:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:11.670 16:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:11.670 16:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:11.670 16:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:11.670 16:10:47 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:11.670 16:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.670 16:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.670 16:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.930 16:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZjZkMWQ3NGI2NTkxYzQ5ZTNhOTJiN2NlM2EwNDNkMzU2NzRkOWVjY2IwOTdiOTRjz3qt5A==: --dhchap-ctrl-secret DHHC-1:01:MzA0Nzc2YTI0NTg0ZjcyZGFkYTZlY2NkMmE5MTdiOTi8reIF: 00:19:12.501 16:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.501 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.501 16:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:12.501 16:10:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.501 16:10:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.501 16:10:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.501 16:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:12.501 16:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:12.501 16:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:12.759 16:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:19:12.759 16:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:12.759 16:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:12.759 16:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:12.759 16:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:12.760 16:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.760 16:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:12.760 16:10:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.760 16:10:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.760 16:10:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.760 16:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:12.760 16:10:48 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:13.020 00:19:13.020 16:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:13.020 16:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:13.020 16:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.279 16:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.279 16:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.279 16:10:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.279 16:10:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.279 16:10:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.279 16:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:13.279 { 00:19:13.279 "cntlid": 111, 00:19:13.279 "qid": 0, 00:19:13.279 "state": "enabled", 00:19:13.279 "thread": "nvmf_tgt_poll_group_000", 00:19:13.279 "listen_address": { 00:19:13.279 "trtype": "TCP", 00:19:13.279 "adrfam": "IPv4", 00:19:13.279 "traddr": "10.0.0.2", 00:19:13.279 "trsvcid": "4420" 00:19:13.279 }, 00:19:13.279 "peer_address": { 00:19:13.279 "trtype": "TCP", 00:19:13.279 "adrfam": "IPv4", 00:19:13.279 "traddr": "10.0.0.1", 00:19:13.279 "trsvcid": "40010" 00:19:13.279 }, 00:19:13.279 "auth": { 00:19:13.279 "state": "completed", 00:19:13.279 "digest": "sha512", 00:19:13.279 "dhgroup": "ffdhe2048" 00:19:13.279 } 00:19:13.279 } 00:19:13.279 ]' 00:19:13.279 16:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:13.279 16:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:13.279 16:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:13.279 16:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:13.279 16:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:13.279 16:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.279 16:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.279 16:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.539 16:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NTQ1Y2Y0MDc5Mzc3MTgzNWYyZTZmOTQ3ZmZjMDBlZWZiNTllMGMyY2U1YWYzZjVhYmQ0YjRlMDUxM2I4NzNkMSKmw9Y=: 00:19:14.107 16:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.107 16:10:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:14.107 16:10:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.107 16:10:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.107 16:10:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.107 16:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:14.107 16:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:14.107 16:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:14.107 16:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:14.366 16:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:19:14.366 16:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:14.366 16:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:14.366 16:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:14.366 16:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:14.366 16:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.366 16:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.367 16:10:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.367 16:10:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.367 16:10:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.367 16:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.367 16:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.626 00:19:14.626 16:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:14.626 16:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.626 16:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:14.886 16:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.886 16:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.886 16:10:50 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.886 16:10:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.886 16:10:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.886 16:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:14.886 { 00:19:14.886 "cntlid": 113, 00:19:14.886 "qid": 0, 00:19:14.886 "state": "enabled", 00:19:14.886 "thread": "nvmf_tgt_poll_group_000", 00:19:14.886 "listen_address": { 00:19:14.886 "trtype": "TCP", 00:19:14.886 "adrfam": "IPv4", 00:19:14.886 "traddr": "10.0.0.2", 00:19:14.886 "trsvcid": "4420" 00:19:14.886 }, 00:19:14.886 "peer_address": { 00:19:14.886 "trtype": "TCP", 00:19:14.886 "adrfam": "IPv4", 00:19:14.886 "traddr": "10.0.0.1", 00:19:14.886 "trsvcid": "40024" 00:19:14.886 }, 00:19:14.886 "auth": { 00:19:14.886 "state": "completed", 00:19:14.886 "digest": "sha512", 00:19:14.886 "dhgroup": "ffdhe3072" 00:19:14.886 } 00:19:14.886 } 00:19:14.886 ]' 00:19:14.886 16:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:14.886 16:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:14.886 16:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:14.886 16:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:14.886 16:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:14.886 16:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.886 16:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.886 16:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.147 16:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ODkxNGYzYjRlYWVkYzI1YTIyMDkzNjNjNmE4MTFkMzZiZjc2YWRkZDUzNWU5NWJlAOylMg==: --dhchap-ctrl-secret DHHC-1:03:OThjM2QwODYwZTU2ZWI1MTgxNjM2MjdhZTFiOWM4NjFkNzAwNzg5MjEyMWMwNTQ0NWY4MjFhMjE4OGQyYjMwOPhTa7Y=: 00:19:15.718 16:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.718 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.718 16:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:15.718 16:10:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.718 16:10:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.718 16:10:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.718 16:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:15.718 16:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:15.718 16:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:15.979 16:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:19:15.979 16:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:15.979 16:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:15.979 16:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:15.979 16:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:15.979 16:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.979 16:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.979 16:10:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.979 16:10:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.979 16:10:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.979 16:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.979 16:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.240 00:19:16.240 16:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:16.240 16:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:16.240 16:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.501 16:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.501 16:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.501 16:10:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.501 16:10:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.501 16:10:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.501 16:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:16.501 { 00:19:16.501 "cntlid": 115, 00:19:16.501 "qid": 0, 00:19:16.501 "state": "enabled", 00:19:16.501 "thread": "nvmf_tgt_poll_group_000", 00:19:16.501 "listen_address": { 00:19:16.501 "trtype": "TCP", 00:19:16.501 "adrfam": "IPv4", 00:19:16.501 "traddr": "10.0.0.2", 00:19:16.501 "trsvcid": "4420" 00:19:16.501 }, 00:19:16.501 "peer_address": { 00:19:16.501 "trtype": "TCP", 00:19:16.501 "adrfam": "IPv4", 00:19:16.501 "traddr": "10.0.0.1", 00:19:16.501 "trsvcid": "40060" 00:19:16.501 }, 00:19:16.501 "auth": { 00:19:16.501 "state": "completed", 00:19:16.501 "digest": "sha512", 00:19:16.501 "dhgroup": "ffdhe3072" 00:19:16.501 } 00:19:16.501 } 
00:19:16.501 ]' 00:19:16.501 16:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:16.501 16:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:16.501 16:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:16.501 16:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:16.501 16:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:16.501 16:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.501 16:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.501 16:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.760 16:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:Nzk3MGM0ZGFjYWRlNTIyMTFmZWMxMDhkNzkwZWIwYWOuEthg: --dhchap-ctrl-secret DHHC-1:02:MzI1ZjJmNWJiZTUwMDQ1MGU2NDk5NmJiZjViZGZiNmZmMjVlMmI3ZjkyOGEzZmM3xvx5zA==: 00:19:17.737 16:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.737 16:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:17.737 16:10:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.737 16:10:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.737 16:10:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.737 16:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:17.737 16:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:17.737 16:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:17.737 16:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:19:17.737 16:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:17.737 16:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:17.737 16:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:17.737 16:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:17.737 16:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.738 16:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.738 16:10:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.738 16:10:53 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.738 16:10:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.738 16:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.738 16:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.998 00:19:17.998 16:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:17.998 16:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.998 16:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:17.998 16:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.998 16:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.998 16:10:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.998 16:10:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.998 16:10:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.998 16:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:17.998 { 00:19:17.998 "cntlid": 117, 00:19:17.998 "qid": 0, 00:19:17.998 "state": "enabled", 00:19:17.998 "thread": "nvmf_tgt_poll_group_000", 00:19:17.998 "listen_address": { 00:19:17.998 "trtype": "TCP", 00:19:17.998 "adrfam": "IPv4", 00:19:17.998 "traddr": "10.0.0.2", 00:19:17.998 "trsvcid": "4420" 00:19:17.998 }, 00:19:17.998 "peer_address": { 00:19:17.998 "trtype": "TCP", 00:19:17.998 "adrfam": "IPv4", 00:19:17.998 "traddr": "10.0.0.1", 00:19:17.998 "trsvcid": "40084" 00:19:17.998 }, 00:19:17.998 "auth": { 00:19:17.998 "state": "completed", 00:19:17.998 "digest": "sha512", 00:19:17.998 "dhgroup": "ffdhe3072" 00:19:17.998 } 00:19:17.998 } 00:19:17.998 ]' 00:19:17.998 16:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:18.259 16:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:18.259 16:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:18.259 16:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:18.259 16:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:18.259 16:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.259 16:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.259 16:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.259 16:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZjZkMWQ3NGI2NTkxYzQ5ZTNhOTJiN2NlM2EwNDNkMzU2NzRkOWVjY2IwOTdiOTRjz3qt5A==: --dhchap-ctrl-secret DHHC-1:01:MzA0Nzc2YTI0NTg0ZjcyZGFkYTZlY2NkMmE5MTdiOTi8reIF: 00:19:19.201 16:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.201 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.201 16:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:19.201 16:10:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.201 16:10:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.201 16:10:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.201 16:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:19.201 16:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:19.201 16:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:19.201 16:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:19:19.201 16:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:19.201 16:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:19.201 16:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:19.201 16:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:19.202 16:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.202 16:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:19.202 16:10:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.202 16:10:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.202 16:10:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.202 16:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:19.202 16:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:19.462 00:19:19.462 16:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:19.462 16:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:19.462 16:10:55 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.723 16:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.723 16:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.723 16:10:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.723 16:10:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.723 16:10:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.723 16:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:19.723 { 00:19:19.723 "cntlid": 119, 00:19:19.723 "qid": 0, 00:19:19.723 "state": "enabled", 00:19:19.723 "thread": "nvmf_tgt_poll_group_000", 00:19:19.723 "listen_address": { 00:19:19.723 "trtype": "TCP", 00:19:19.723 "adrfam": "IPv4", 00:19:19.723 "traddr": "10.0.0.2", 00:19:19.723 "trsvcid": "4420" 00:19:19.723 }, 00:19:19.723 "peer_address": { 00:19:19.723 "trtype": "TCP", 00:19:19.723 "adrfam": "IPv4", 00:19:19.723 "traddr": "10.0.0.1", 00:19:19.723 "trsvcid": "40098" 00:19:19.723 }, 00:19:19.723 "auth": { 00:19:19.723 "state": "completed", 00:19:19.723 "digest": "sha512", 00:19:19.723 "dhgroup": "ffdhe3072" 00:19:19.723 } 00:19:19.723 } 00:19:19.723 ]' 00:19:19.723 16:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:19.723 16:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:19.723 16:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:19.723 16:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:19.723 16:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:19.723 16:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.723 16:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.723 16:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.984 16:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NTQ1Y2Y0MDc5Mzc3MTgzNWYyZTZmOTQ3ZmZjMDBlZWZiNTllMGMyY2U1YWYzZjVhYmQ0YjRlMDUxM2I4NzNkMSKmw9Y=: 00:19:20.926 16:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.926 16:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:20.926 16:10:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.926 16:10:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.926 16:10:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.926 16:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:20.926 16:10:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:20.926 16:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:20.926 16:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:20.926 16:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:19:20.926 16:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:20.926 16:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:20.926 16:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:20.926 16:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:20.926 16:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.926 16:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.926 16:10:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.926 16:10:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.926 16:10:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.926 16:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.926 16:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.187 00:19:21.187 16:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:21.187 16:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.187 16:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:21.448 16:10:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.448 16:10:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.448 16:10:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.448 16:10:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.448 16:10:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.448 16:10:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:21.448 { 00:19:21.448 "cntlid": 121, 00:19:21.448 "qid": 0, 00:19:21.448 "state": "enabled", 00:19:21.448 "thread": "nvmf_tgt_poll_group_000", 00:19:21.448 "listen_address": { 00:19:21.448 "trtype": "TCP", 00:19:21.448 "adrfam": "IPv4", 
00:19:21.448 "traddr": "10.0.0.2", 00:19:21.448 "trsvcid": "4420" 00:19:21.448 }, 00:19:21.448 "peer_address": { 00:19:21.448 "trtype": "TCP", 00:19:21.448 "adrfam": "IPv4", 00:19:21.448 "traddr": "10.0.0.1", 00:19:21.448 "trsvcid": "40134" 00:19:21.448 }, 00:19:21.448 "auth": { 00:19:21.448 "state": "completed", 00:19:21.448 "digest": "sha512", 00:19:21.448 "dhgroup": "ffdhe4096" 00:19:21.448 } 00:19:21.448 } 00:19:21.448 ]' 00:19:21.448 16:10:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:21.448 16:10:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:21.448 16:10:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:21.448 16:10:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:21.448 16:10:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:21.448 16:10:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.448 16:10:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.448 16:10:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.709 16:10:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ODkxNGYzYjRlYWVkYzI1YTIyMDkzNjNjNmE4MTFkMzZiZjc2YWRkZDUzNWU5NWJlAOylMg==: --dhchap-ctrl-secret DHHC-1:03:OThjM2QwODYwZTU2ZWI1MTgxNjM2MjdhZTFiOWM4NjFkNzAwNzg5MjEyMWMwNTQ0NWY4MjFhMjE4OGQyYjMwOPhTa7Y=: 00:19:22.281 16:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.281 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.542 16:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:22.542 16:10:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.542 16:10:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.542 16:10:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.542 16:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:22.542 16:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:22.542 16:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:22.542 16:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:19:22.542 16:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:22.542 16:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:22.542 16:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:22.542 16:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:22.542 16:10:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.542 16:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.542 16:10:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.542 16:10:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.542 16:10:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.542 16:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.542 16:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.803 00:19:22.803 16:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:22.803 16:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:22.803 16:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.064 16:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.064 16:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.064 16:10:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.064 16:10:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.064 16:10:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.064 16:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:23.064 { 00:19:23.064 "cntlid": 123, 00:19:23.064 "qid": 0, 00:19:23.064 "state": "enabled", 00:19:23.064 "thread": "nvmf_tgt_poll_group_000", 00:19:23.064 "listen_address": { 00:19:23.064 "trtype": "TCP", 00:19:23.064 "adrfam": "IPv4", 00:19:23.064 "traddr": "10.0.0.2", 00:19:23.064 "trsvcid": "4420" 00:19:23.064 }, 00:19:23.064 "peer_address": { 00:19:23.064 "trtype": "TCP", 00:19:23.064 "adrfam": "IPv4", 00:19:23.064 "traddr": "10.0.0.1", 00:19:23.064 "trsvcid": "36728" 00:19:23.064 }, 00:19:23.064 "auth": { 00:19:23.064 "state": "completed", 00:19:23.064 "digest": "sha512", 00:19:23.064 "dhgroup": "ffdhe4096" 00:19:23.064 } 00:19:23.064 } 00:19:23.064 ]' 00:19:23.064 16:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:23.064 16:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:23.064 16:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:23.064 16:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:23.064 16:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:23.064 16:10:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.064 16:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.064 16:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.325 16:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:Nzk3MGM0ZGFjYWRlNTIyMTFmZWMxMDhkNzkwZWIwYWOuEthg: --dhchap-ctrl-secret DHHC-1:02:MzI1ZjJmNWJiZTUwMDQ1MGU2NDk5NmJiZjViZGZiNmZmMjVlMmI3ZjkyOGEzZmM3xvx5zA==: 00:19:24.270 16:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.270 16:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:24.270 16:10:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.270 16:10:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.270 16:10:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.270 16:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:24.270 16:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:24.270 16:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:24.270 16:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:19:24.270 16:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:24.270 16:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:24.270 16:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:24.270 16:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:24.270 16:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.271 16:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.271 16:10:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.271 16:10:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.271 16:10:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.271 16:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.271 16:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.531 00:19:24.531 16:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:24.531 16:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.531 16:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:24.531 16:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.531 16:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.531 16:11:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.531 16:11:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.792 16:11:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.792 16:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:24.792 { 00:19:24.792 "cntlid": 125, 00:19:24.792 "qid": 0, 00:19:24.792 "state": "enabled", 00:19:24.792 "thread": "nvmf_tgt_poll_group_000", 00:19:24.792 "listen_address": { 00:19:24.792 "trtype": "TCP", 00:19:24.792 "adrfam": "IPv4", 00:19:24.792 "traddr": "10.0.0.2", 00:19:24.792 "trsvcid": "4420" 00:19:24.792 }, 00:19:24.792 "peer_address": { 00:19:24.792 "trtype": "TCP", 00:19:24.792 "adrfam": "IPv4", 00:19:24.792 "traddr": "10.0.0.1", 00:19:24.792 "trsvcid": "36758" 00:19:24.792 }, 00:19:24.792 "auth": { 00:19:24.792 "state": "completed", 00:19:24.792 "digest": "sha512", 00:19:24.792 "dhgroup": "ffdhe4096" 00:19:24.792 } 00:19:24.792 } 00:19:24.792 ]' 00:19:24.792 16:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:24.792 16:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:24.792 16:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:24.792 16:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:24.792 16:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:24.792 16:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.792 16:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.792 16:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.053 16:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZjZkMWQ3NGI2NTkxYzQ5ZTNhOTJiN2NlM2EwNDNkMzU2NzRkOWVjY2IwOTdiOTRjz3qt5A==: --dhchap-ctrl-secret DHHC-1:01:MzA0Nzc2YTI0NTg0ZjcyZGFkYTZlY2NkMmE5MTdiOTi8reIF: 00:19:25.624 16:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
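For reference, the sha512/ffdhe4096/key2 cycle that finishes in the trace above reduces to a short sequence of RPC and jq calls. This is a minimal sketch reconstructed from the trace, not the test script itself: the generated DHHC-1 secrets are replaced by shell variables, HOST_UUID stands in for the nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be host identity used on this test bed, and the target-side calls (wrapped in rpc_cmd in the trace) are assumed to go through scripts/rpc.py on its default socket.

  # Host side: restrict DH-HMAC-CHAP negotiation to the digest/dhgroup under test
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

  # Target side: allow the host NQN with this iteration's key pair
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      "nqn.2014-08.org.nvmexpress:uuid:${HOST_UUID}" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Host side: attach with the matching keys, then check what was negotiated
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "nqn.2014-08.org.nvmexpress:uuid:${HOST_UUID}" \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'   # expect sha512 / ffdhe4096 / completed

  # Tear down before the next key/dhgroup combination
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0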
00:19:25.624 16:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:25.624 16:11:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.624 16:11:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.624 16:11:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.624 16:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:25.624 16:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:25.624 16:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:25.885 16:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:19:25.885 16:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:25.885 16:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:25.885 16:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:25.885 16:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:25.885 16:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.885 16:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:25.885 16:11:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.885 16:11:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.885 16:11:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.885 16:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:25.885 16:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:26.145 00:19:26.145 16:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:26.145 16:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.145 16:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:26.408 16:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.408 16:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.408 16:11:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.408 16:11:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:19:26.408 16:11:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.408 16:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:26.408 { 00:19:26.408 "cntlid": 127, 00:19:26.408 "qid": 0, 00:19:26.408 "state": "enabled", 00:19:26.408 "thread": "nvmf_tgt_poll_group_000", 00:19:26.408 "listen_address": { 00:19:26.408 "trtype": "TCP", 00:19:26.408 "adrfam": "IPv4", 00:19:26.408 "traddr": "10.0.0.2", 00:19:26.408 "trsvcid": "4420" 00:19:26.408 }, 00:19:26.408 "peer_address": { 00:19:26.408 "trtype": "TCP", 00:19:26.408 "adrfam": "IPv4", 00:19:26.408 "traddr": "10.0.0.1", 00:19:26.408 "trsvcid": "36788" 00:19:26.408 }, 00:19:26.408 "auth": { 00:19:26.408 "state": "completed", 00:19:26.408 "digest": "sha512", 00:19:26.408 "dhgroup": "ffdhe4096" 00:19:26.408 } 00:19:26.408 } 00:19:26.408 ]' 00:19:26.408 16:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:26.408 16:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:26.408 16:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:26.408 16:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:26.408 16:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:26.408 16:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.408 16:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.408 16:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.669 16:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NTQ1Y2Y0MDc5Mzc3MTgzNWYyZTZmOTQ3ZmZjMDBlZWZiNTllMGMyY2U1YWYzZjVhYmQ0YjRlMDUxM2I4NzNkMSKmw9Y=: 00:19:27.610 16:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.610 16:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:27.610 16:11:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.610 16:11:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.610 16:11:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.610 16:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:27.610 16:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:27.610 16:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:27.610 16:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:27.610 16:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:19:27.610 16:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:27.610 16:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:27.610 16:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:27.610 16:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:27.611 16:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.611 16:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:27.611 16:11:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.611 16:11:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.611 16:11:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.611 16:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:27.611 16:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:27.870 00:19:27.870 16:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:27.870 16:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:27.870 16:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.130 16:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.130 16:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.130 16:11:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.130 16:11:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.130 16:11:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.130 16:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:28.130 { 00:19:28.130 "cntlid": 129, 00:19:28.130 "qid": 0, 00:19:28.130 "state": "enabled", 00:19:28.130 "thread": "nvmf_tgt_poll_group_000", 00:19:28.130 "listen_address": { 00:19:28.130 "trtype": "TCP", 00:19:28.130 "adrfam": "IPv4", 00:19:28.130 "traddr": "10.0.0.2", 00:19:28.130 "trsvcid": "4420" 00:19:28.130 }, 00:19:28.130 "peer_address": { 00:19:28.130 "trtype": "TCP", 00:19:28.130 "adrfam": "IPv4", 00:19:28.130 "traddr": "10.0.0.1", 00:19:28.130 "trsvcid": "36834" 00:19:28.130 }, 00:19:28.130 "auth": { 00:19:28.130 "state": "completed", 00:19:28.131 "digest": "sha512", 00:19:28.131 "dhgroup": "ffdhe6144" 00:19:28.131 } 00:19:28.131 } 00:19:28.131 ]' 00:19:28.131 16:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:28.131 16:11:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:28.131 16:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:28.131 16:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:28.131 16:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:28.131 16:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.131 16:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.131 16:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.391 16:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ODkxNGYzYjRlYWVkYzI1YTIyMDkzNjNjNmE4MTFkMzZiZjc2YWRkZDUzNWU5NWJlAOylMg==: --dhchap-ctrl-secret DHHC-1:03:OThjM2QwODYwZTU2ZWI1MTgxNjM2MjdhZTFiOWM4NjFkNzAwNzg5MjEyMWMwNTQ0NWY4MjFhMjE4OGQyYjMwOPhTa7Y=: 00:19:29.334 16:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.334 16:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:29.334 16:11:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.334 16:11:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.334 16:11:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.334 16:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:29.334 16:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:29.334 16:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:29.334 16:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:19:29.334 16:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:29.334 16:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:29.334 16:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:29.334 16:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:29.334 16:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.334 16:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.334 16:11:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.334 16:11:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.334 16:11:05 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.334 16:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.334 16:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.594 00:19:29.854 16:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:29.854 16:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.854 16:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.854 16:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.854 16:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.854 16:11:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.854 16:11:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.854 16:11:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.854 16:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.854 { 00:19:29.854 "cntlid": 131, 00:19:29.854 "qid": 0, 00:19:29.854 "state": "enabled", 00:19:29.854 "thread": "nvmf_tgt_poll_group_000", 00:19:29.854 "listen_address": { 00:19:29.854 "trtype": "TCP", 00:19:29.854 "adrfam": "IPv4", 00:19:29.854 "traddr": "10.0.0.2", 00:19:29.854 "trsvcid": "4420" 00:19:29.854 }, 00:19:29.854 "peer_address": { 00:19:29.854 "trtype": "TCP", 00:19:29.854 "adrfam": "IPv4", 00:19:29.854 "traddr": "10.0.0.1", 00:19:29.854 "trsvcid": "36872" 00:19:29.854 }, 00:19:29.854 "auth": { 00:19:29.854 "state": "completed", 00:19:29.854 "digest": "sha512", 00:19:29.854 "dhgroup": "ffdhe6144" 00:19:29.854 } 00:19:29.854 } 00:19:29.854 ]' 00:19:29.854 16:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:29.854 16:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:29.854 16:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:30.115 16:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:30.115 16:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:30.115 16:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.115 16:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.115 16:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.115 16:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:Nzk3MGM0ZGFjYWRlNTIyMTFmZWMxMDhkNzkwZWIwYWOuEthg: --dhchap-ctrl-secret DHHC-1:02:MzI1ZjJmNWJiZTUwMDQ1MGU2NDk5NmJiZjViZGZiNmZmMjVlMmI3ZjkyOGEzZmM3xvx5zA==: 00:19:31.058 16:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.058 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.058 16:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:31.058 16:11:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.058 16:11:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.058 16:11:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.058 16:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:31.058 16:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:31.058 16:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:31.058 16:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:19:31.058 16:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:31.058 16:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:31.058 16:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:31.058 16:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:31.058 16:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.058 16:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.058 16:11:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.058 16:11:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.059 16:11:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.059 16:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.059 16:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.629 00:19:31.629 16:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:31.629 16:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.629 16:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:31.629 16:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.629 16:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.629 16:11:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.629 16:11:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.629 16:11:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.629 16:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.629 { 00:19:31.629 "cntlid": 133, 00:19:31.629 "qid": 0, 00:19:31.629 "state": "enabled", 00:19:31.629 "thread": "nvmf_tgt_poll_group_000", 00:19:31.629 "listen_address": { 00:19:31.629 "trtype": "TCP", 00:19:31.629 "adrfam": "IPv4", 00:19:31.629 "traddr": "10.0.0.2", 00:19:31.629 "trsvcid": "4420" 00:19:31.629 }, 00:19:31.629 "peer_address": { 00:19:31.629 "trtype": "TCP", 00:19:31.629 "adrfam": "IPv4", 00:19:31.629 "traddr": "10.0.0.1", 00:19:31.629 "trsvcid": "36906" 00:19:31.629 }, 00:19:31.629 "auth": { 00:19:31.629 "state": "completed", 00:19:31.629 "digest": "sha512", 00:19:31.629 "dhgroup": "ffdhe6144" 00:19:31.629 } 00:19:31.629 } 00:19:31.629 ]' 00:19:31.629 16:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:31.629 16:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:31.629 16:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.629 16:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:31.629 16:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.889 16:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.889 16:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.889 16:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.889 16:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZjZkMWQ3NGI2NTkxYzQ5ZTNhOTJiN2NlM2EwNDNkMzU2NzRkOWVjY2IwOTdiOTRjz3qt5A==: --dhchap-ctrl-secret DHHC-1:01:MzA0Nzc2YTI0NTg0ZjcyZGFkYTZlY2NkMmE5MTdiOTi8reIF: 00:19:32.905 16:11:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.905 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.905 16:11:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:32.905 16:11:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.905 16:11:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.906 16:11:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
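The trace alternates those RPC attach/detach cycles with an in-band authentication pass through nvme-cli against the same subsystem. Stripped of the generated secrets, one connect/disconnect/remove_host round from this run looks like the sketch below; HOST_UUID, HOST_KEY and CTRL_KEY are placeholders for the host identity and the DHHC-1 secret strings that appear in the log.

  # Kernel initiator: connect with DH-HMAC-CHAP secrets, then drop the association
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "nqn.2014-08.org.nvmexpress:uuid:${HOST_UUID}" --hostid "${HOST_UUID}" \
      --dhchap-secret "${HOST_KEY}" --dhchap-ctrl-secret "${CTRL_KEY}"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0

  # Target side: remove the host entry before the next key is provisioned
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      "nqn.2014-08.org.nvmexpress:uuid:${HOST_UUID}"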
00:19:32.906 16:11:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:32.906 16:11:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:32.906 16:11:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:32.906 16:11:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:19:32.906 16:11:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:32.906 16:11:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:32.906 16:11:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:32.906 16:11:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:32.906 16:11:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.906 16:11:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:32.906 16:11:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.906 16:11:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.906 16:11:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.906 16:11:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:32.906 16:11:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:33.166 00:19:33.166 16:11:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.166 16:11:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.166 16:11:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.427 16:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.427 16:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.427 16:11:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.427 16:11:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.427 16:11:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.427 16:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:33.427 { 00:19:33.427 "cntlid": 135, 00:19:33.427 "qid": 0, 00:19:33.427 "state": "enabled", 00:19:33.427 "thread": "nvmf_tgt_poll_group_000", 00:19:33.427 "listen_address": { 00:19:33.427 "trtype": "TCP", 00:19:33.427 "adrfam": "IPv4", 00:19:33.427 "traddr": "10.0.0.2", 00:19:33.427 "trsvcid": 
"4420" 00:19:33.427 }, 00:19:33.427 "peer_address": { 00:19:33.427 "trtype": "TCP", 00:19:33.427 "adrfam": "IPv4", 00:19:33.427 "traddr": "10.0.0.1", 00:19:33.427 "trsvcid": "50964" 00:19:33.427 }, 00:19:33.427 "auth": { 00:19:33.427 "state": "completed", 00:19:33.427 "digest": "sha512", 00:19:33.427 "dhgroup": "ffdhe6144" 00:19:33.427 } 00:19:33.427 } 00:19:33.427 ]' 00:19:33.427 16:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:33.427 16:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:33.427 16:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:33.427 16:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:33.427 16:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:33.427 16:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.427 16:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.427 16:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.686 16:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NTQ1Y2Y0MDc5Mzc3MTgzNWYyZTZmOTQ3ZmZjMDBlZWZiNTllMGMyY2U1YWYzZjVhYmQ0YjRlMDUxM2I4NzNkMSKmw9Y=: 00:19:34.626 16:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.627 16:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:34.627 16:11:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.627 16:11:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.627 16:11:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.627 16:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:34.627 16:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:34.627 16:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:34.627 16:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:34.627 16:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:19:34.627 16:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:34.627 16:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:34.627 16:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:34.627 16:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:34.627 16:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.627 16:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.627 16:11:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.627 16:11:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.627 16:11:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.627 16:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.627 16:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.199 00:19:35.199 16:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:35.199 16:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:35.199 16:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.459 16:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.459 16:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.459 16:11:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.459 16:11:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.459 16:11:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.459 16:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:35.459 { 00:19:35.459 "cntlid": 137, 00:19:35.459 "qid": 0, 00:19:35.459 "state": "enabled", 00:19:35.459 "thread": "nvmf_tgt_poll_group_000", 00:19:35.459 "listen_address": { 00:19:35.459 "trtype": "TCP", 00:19:35.459 "adrfam": "IPv4", 00:19:35.459 "traddr": "10.0.0.2", 00:19:35.459 "trsvcid": "4420" 00:19:35.459 }, 00:19:35.459 "peer_address": { 00:19:35.459 "trtype": "TCP", 00:19:35.459 "adrfam": "IPv4", 00:19:35.459 "traddr": "10.0.0.1", 00:19:35.459 "trsvcid": "50996" 00:19:35.459 }, 00:19:35.459 "auth": { 00:19:35.459 "state": "completed", 00:19:35.459 "digest": "sha512", 00:19:35.459 "dhgroup": "ffdhe8192" 00:19:35.459 } 00:19:35.460 } 00:19:35.460 ]' 00:19:35.460 16:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:35.460 16:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:35.460 16:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:35.460 16:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:35.460 16:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:35.460 16:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:19:35.460 16:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.460 16:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.721 16:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ODkxNGYzYjRlYWVkYzI1YTIyMDkzNjNjNmE4MTFkMzZiZjc2YWRkZDUzNWU5NWJlAOylMg==: --dhchap-ctrl-secret DHHC-1:03:OThjM2QwODYwZTU2ZWI1MTgxNjM2MjdhZTFiOWM4NjFkNzAwNzg5MjEyMWMwNTQ0NWY4MjFhMjE4OGQyYjMwOPhTa7Y=: 00:19:36.291 16:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.291 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.291 16:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:36.291 16:11:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.291 16:11:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.291 16:11:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.291 16:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:36.291 16:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:36.291 16:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:36.552 16:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:19:36.552 16:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:36.552 16:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:36.552 16:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:36.552 16:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:36.552 16:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.552 16:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.552 16:11:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.552 16:11:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.552 16:11:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.552 16:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.552 16:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.123 00:19:37.123 16:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:37.123 16:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:37.123 16:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.385 16:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.385 16:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.385 16:11:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.385 16:11:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.385 16:11:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.385 16:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:37.385 { 00:19:37.385 "cntlid": 139, 00:19:37.385 "qid": 0, 00:19:37.385 "state": "enabled", 00:19:37.385 "thread": "nvmf_tgt_poll_group_000", 00:19:37.385 "listen_address": { 00:19:37.385 "trtype": "TCP", 00:19:37.385 "adrfam": "IPv4", 00:19:37.385 "traddr": "10.0.0.2", 00:19:37.385 "trsvcid": "4420" 00:19:37.385 }, 00:19:37.385 "peer_address": { 00:19:37.385 "trtype": "TCP", 00:19:37.385 "adrfam": "IPv4", 00:19:37.385 "traddr": "10.0.0.1", 00:19:37.385 "trsvcid": "51034" 00:19:37.385 }, 00:19:37.385 "auth": { 00:19:37.385 "state": "completed", 00:19:37.385 "digest": "sha512", 00:19:37.385 "dhgroup": "ffdhe8192" 00:19:37.385 } 00:19:37.385 } 00:19:37.385 ]' 00:19:37.385 16:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:37.385 16:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:37.385 16:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:37.385 16:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:37.385 16:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:37.385 16:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.385 16:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.385 16:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.645 16:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:Nzk3MGM0ZGFjYWRlNTIyMTFmZWMxMDhkNzkwZWIwYWOuEthg: --dhchap-ctrl-secret DHHC-1:02:MzI1ZjJmNWJiZTUwMDQ1MGU2NDk5NmJiZjViZGZiNmZmMjVlMmI3ZjkyOGEzZmM3xvx5zA==: 00:19:38.216 16:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.216 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
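For reference, the key1 round that ends here follows the same fixed sequence as every other digest/dhgroup/key combination in this trace. The lines below are a minimal shell sketch of that flow, not the script itself: the rpc.py path, socket paths, NQNs and host UUID are copied from this run, while the hostrpc/rpc_cmd wrappers and the truncated DHHC-1 secrets are illustrative stand-ins for the helpers and full secrets used by auth.sh.
  # One connect_authenticate round (sha512 / ffdhe8192 / key1), condensed from the trace above.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostrpc() { "$RPC" -s /var/tmp/host.sock "$@"; }      # host-side bdev/nvme RPCs
  rpc_cmd() { "$RPC" -s /var/tmp/spdk.sock "$@"; }      # target-side nvmf RPCs
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
  SUBNQN=nqn.2024-03.io.spdk:cnode0

  # Restrict the initiator to the digest/dhgroup under test.
  hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  # Register the host on the subsystem with the key pair under test.
  rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # Attach through the host app and verify what was negotiated on the target side.
  hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
          -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1
  rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN" \
          | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'   # expect sha512 / ffdhe8192 / completed
  hostrpc bdev_nvme_detach_controller nvme0
  # Repeat the handshake with the kernel initiator, then tear the registration down.
  nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
       --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be \
       --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'
  nvme disconnect -n "$SUBNQN"
  rpc_cmd nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"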
00:19:38.216 16:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:38.216 16:11:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.216 16:11:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.216 16:11:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.216 16:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:38.216 16:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:38.216 16:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:38.476 16:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:19:38.476 16:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:38.476 16:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:38.476 16:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:38.476 16:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:38.476 16:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.476 16:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.476 16:11:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.476 16:11:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.476 16:11:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.476 16:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.476 16:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.047 00:19:39.047 16:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:39.047 16:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:39.047 16:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.307 16:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.308 16:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.308 16:11:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
00:19:39.308 16:11:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.308 16:11:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.308 16:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:39.308 { 00:19:39.308 "cntlid": 141, 00:19:39.308 "qid": 0, 00:19:39.308 "state": "enabled", 00:19:39.308 "thread": "nvmf_tgt_poll_group_000", 00:19:39.308 "listen_address": { 00:19:39.308 "trtype": "TCP", 00:19:39.308 "adrfam": "IPv4", 00:19:39.308 "traddr": "10.0.0.2", 00:19:39.308 "trsvcid": "4420" 00:19:39.308 }, 00:19:39.308 "peer_address": { 00:19:39.308 "trtype": "TCP", 00:19:39.308 "adrfam": "IPv4", 00:19:39.308 "traddr": "10.0.0.1", 00:19:39.308 "trsvcid": "51056" 00:19:39.308 }, 00:19:39.308 "auth": { 00:19:39.308 "state": "completed", 00:19:39.308 "digest": "sha512", 00:19:39.308 "dhgroup": "ffdhe8192" 00:19:39.308 } 00:19:39.308 } 00:19:39.308 ]' 00:19:39.308 16:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:39.308 16:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:39.308 16:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:39.308 16:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:39.308 16:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:39.308 16:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.308 16:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.308 16:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.569 16:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZjZkMWQ3NGI2NTkxYzQ5ZTNhOTJiN2NlM2EwNDNkMzU2NzRkOWVjY2IwOTdiOTRjz3qt5A==: --dhchap-ctrl-secret DHHC-1:01:MzA0Nzc2YTI0NTg0ZjcyZGFkYTZlY2NkMmE5MTdiOTi8reIF: 00:19:40.141 16:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.141 16:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:40.141 16:11:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.141 16:11:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.401 16:11:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.401 16:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:40.401 16:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:40.401 16:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:40.401 16:11:16 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:19:40.401 16:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:40.401 16:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:40.401 16:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:40.401 16:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:40.401 16:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.401 16:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:40.401 16:11:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.401 16:11:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.401 16:11:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.401 16:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:40.401 16:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:40.972 00:19:40.972 16:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:40.972 16:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:40.972 16:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.233 16:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.233 16:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.233 16:11:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.233 16:11:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.233 16:11:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.233 16:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:41.233 { 00:19:41.233 "cntlid": 143, 00:19:41.233 "qid": 0, 00:19:41.233 "state": "enabled", 00:19:41.233 "thread": "nvmf_tgt_poll_group_000", 00:19:41.233 "listen_address": { 00:19:41.233 "trtype": "TCP", 00:19:41.233 "adrfam": "IPv4", 00:19:41.233 "traddr": "10.0.0.2", 00:19:41.233 "trsvcid": "4420" 00:19:41.233 }, 00:19:41.233 "peer_address": { 00:19:41.233 "trtype": "TCP", 00:19:41.233 "adrfam": "IPv4", 00:19:41.233 "traddr": "10.0.0.1", 00:19:41.233 "trsvcid": "51086" 00:19:41.233 }, 00:19:41.233 "auth": { 00:19:41.233 "state": "completed", 00:19:41.233 "digest": "sha512", 00:19:41.233 "dhgroup": "ffdhe8192" 00:19:41.233 } 00:19:41.233 } 00:19:41.233 ]' 00:19:41.233 16:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:41.233 16:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 
-- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:41.233 16:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:41.233 16:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:41.233 16:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:41.233 16:11:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.233 16:11:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.233 16:11:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.493 16:11:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NTQ1Y2Y0MDc5Mzc3MTgzNWYyZTZmOTQ3ZmZjMDBlZWZiNTllMGMyY2U1YWYzZjVhYmQ0YjRlMDUxM2I4NzNkMSKmw9Y=: 00:19:42.065 16:11:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.065 16:11:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:42.065 16:11:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.065 16:11:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.325 16:11:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.325 16:11:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:42.325 16:11:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:19:42.325 16:11:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:42.325 16:11:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:42.325 16:11:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:42.325 16:11:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:42.325 16:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:19:42.325 16:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:42.325 16:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:42.325 16:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:42.325 16:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:42.325 16:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.325 16:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.325 16:11:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.325 16:11:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.325 16:11:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.325 16:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.325 16:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.897 00:19:42.897 16:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:42.897 16:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:42.897 16:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.158 16:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.158 16:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.158 16:11:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.158 16:11:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.158 16:11:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.158 16:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:43.158 { 00:19:43.158 "cntlid": 145, 00:19:43.158 "qid": 0, 00:19:43.158 "state": "enabled", 00:19:43.158 "thread": "nvmf_tgt_poll_group_000", 00:19:43.158 "listen_address": { 00:19:43.158 "trtype": "TCP", 00:19:43.158 "adrfam": "IPv4", 00:19:43.158 "traddr": "10.0.0.2", 00:19:43.158 "trsvcid": "4420" 00:19:43.158 }, 00:19:43.158 "peer_address": { 00:19:43.158 "trtype": "TCP", 00:19:43.158 "adrfam": "IPv4", 00:19:43.158 "traddr": "10.0.0.1", 00:19:43.158 "trsvcid": "49494" 00:19:43.158 }, 00:19:43.158 "auth": { 00:19:43.158 "state": "completed", 00:19:43.158 "digest": "sha512", 00:19:43.158 "dhgroup": "ffdhe8192" 00:19:43.158 } 00:19:43.158 } 00:19:43.158 ]' 00:19:43.158 16:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:43.158 16:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:43.158 16:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:43.158 16:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:43.158 16:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:43.158 16:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.158 16:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.158 16:11:18 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.419 16:11:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ODkxNGYzYjRlYWVkYzI1YTIyMDkzNjNjNmE4MTFkMzZiZjc2YWRkZDUzNWU5NWJlAOylMg==: --dhchap-ctrl-secret DHHC-1:03:OThjM2QwODYwZTU2ZWI1MTgxNjM2MjdhZTFiOWM4NjFkNzAwNzg5MjEyMWMwNTQ0NWY4MjFhMjE4OGQyYjMwOPhTa7Y=: 00:19:43.991 16:11:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.991 16:11:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:44.251 16:11:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.251 16:11:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.251 16:11:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.251 16:11:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:19:44.251 16:11:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.251 16:11:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.251 16:11:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.251 16:11:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:44.251 16:11:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:44.251 16:11:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:44.251 16:11:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:44.251 16:11:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:44.251 16:11:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:44.251 16:11:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:44.251 16:11:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:44.251 16:11:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:44.511 request: 00:19:44.511 { 00:19:44.511 "name": "nvme0", 00:19:44.511 "trtype": "tcp", 00:19:44.511 "traddr": "10.0.0.2", 00:19:44.511 "adrfam": "ipv4", 00:19:44.511 "trsvcid": "4420", 00:19:44.511 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:44.511 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:44.511 "prchk_reftag": false, 00:19:44.511 "prchk_guard": false, 00:19:44.511 "hdgst": false, 00:19:44.511 "ddgst": false, 00:19:44.511 "dhchap_key": "key2", 00:19:44.511 "method": "bdev_nvme_attach_controller", 00:19:44.511 "req_id": 1 00:19:44.511 } 00:19:44.511 Got JSON-RPC error response 00:19:44.511 response: 00:19:44.511 { 00:19:44.511 "code": -5, 00:19:44.511 "message": "Input/output error" 00:19:44.511 } 00:19:44.511 16:11:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:44.511 16:11:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:44.511 16:11:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:44.511 16:11:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:44.511 16:11:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:44.511 16:11:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.769 16:11:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.769 16:11:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.769 16:11:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.769 16:11:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.769 16:11:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.769 16:11:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.769 16:11:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:44.769 16:11:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:44.769 16:11:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:44.769 16:11:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:44.769 16:11:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:44.769 16:11:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:44.769 16:11:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:44.769 16:11:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:44.770 16:11:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:45.029 request: 00:19:45.029 { 00:19:45.029 "name": "nvme0", 00:19:45.029 "trtype": "tcp", 00:19:45.029 "traddr": "10.0.0.2", 00:19:45.029 "adrfam": "ipv4", 00:19:45.029 "trsvcid": "4420", 00:19:45.029 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:45.029 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:45.029 "prchk_reftag": false, 00:19:45.029 "prchk_guard": false, 00:19:45.029 "hdgst": false, 00:19:45.029 "ddgst": false, 00:19:45.029 "dhchap_key": "key1", 00:19:45.029 "dhchap_ctrlr_key": "ckey2", 00:19:45.029 "method": "bdev_nvme_attach_controller", 00:19:45.029 "req_id": 1 00:19:45.029 } 00:19:45.029 Got JSON-RPC error response 00:19:45.029 response: 00:19:45.029 { 00:19:45.029 "code": -5, 00:19:45.029 "message": "Input/output error" 00:19:45.029 } 00:19:45.029 16:11:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:45.029 16:11:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:45.029 16:11:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:45.029 16:11:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:45.029 16:11:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:45.029 16:11:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.029 16:11:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.029 16:11:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.029 16:11:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:19:45.029 16:11:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.029 16:11:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.289 16:11:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.289 16:11:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.289 16:11:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:45.289 16:11:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.289 16:11:20 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:45.289 16:11:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:45.289 16:11:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:45.289 16:11:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:45.289 16:11:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.289 16:11:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.550 request: 00:19:45.550 { 00:19:45.550 "name": "nvme0", 00:19:45.550 "trtype": "tcp", 00:19:45.550 "traddr": "10.0.0.2", 00:19:45.550 "adrfam": "ipv4", 00:19:45.550 "trsvcid": "4420", 00:19:45.550 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:45.550 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:45.550 "prchk_reftag": false, 00:19:45.550 "prchk_guard": false, 00:19:45.550 "hdgst": false, 00:19:45.550 "ddgst": false, 00:19:45.550 "dhchap_key": "key1", 00:19:45.550 "dhchap_ctrlr_key": "ckey1", 00:19:45.550 "method": "bdev_nvme_attach_controller", 00:19:45.550 "req_id": 1 00:19:45.550 } 00:19:45.550 Got JSON-RPC error response 00:19:45.550 response: 00:19:45.550 { 00:19:45.550 "code": -5, 00:19:45.550 "message": "Input/output error" 00:19:45.550 } 00:19:45.550 16:11:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:45.550 16:11:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:45.550 16:11:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:45.550 16:11:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:45.550 16:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:45.550 16:11:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.550 16:11:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.550 16:11:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.550 16:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 2281698 00:19:45.550 16:11:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2281698 ']' 00:19:45.550 16:11:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2281698 00:19:45.550 16:11:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:45.550 16:11:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:45.550 16:11:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2281698 00:19:45.809 16:11:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:45.809 16:11:21 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:45.809 16:11:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2281698' 00:19:45.809 killing process with pid 2281698 00:19:45.809 16:11:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2281698 00:19:45.809 16:11:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2281698 00:19:45.809 16:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:45.809 16:11:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:45.809 16:11:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:45.809 16:11:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.809 16:11:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2307888 00:19:45.809 16:11:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2307888 00:19:45.810 16:11:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:45.810 16:11:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2307888 ']' 00:19:45.810 16:11:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:45.810 16:11:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:45.810 16:11:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:45.810 16:11:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:45.810 16:11:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.750 16:11:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:46.750 16:11:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:46.750 16:11:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:46.750 16:11:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:46.750 16:11:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.750 16:11:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:46.750 16:11:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:46.750 16:11:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 2307888 00:19:46.750 16:11:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2307888 ']' 00:19:46.750 16:11:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.750 16:11:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:46.750 16:11:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
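Before the target is relaunched here with -L nvmf_auth for the remaining cases, the mismatched-key checks traced above all use the same expected-failure pattern. A condensed sketch of that pattern follows; NOT is a simplified stand-in for the autotest helper that passes only when the wrapped command fails, and the remaining names repeat the values used in this run.
  # Negative DH-HMAC-CHAP tests: the subsystem only knows key1, so presenting any other key
  # (or controller key) must make bdev_nvme_attach_controller fail with the JSON-RPC
  # "Input/output error" (code -5) seen in the request/response dumps above.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostrpc() { "$RPC" -s /var/tmp/host.sock "$@"; }
  rpc_cmd() { "$RPC" -s /var/tmp/spdk.sock "$@"; }
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  NOT() { ! "$@"; }   # simplified: succeed only if the wrapped command fails

  rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1
  # Wrong host key: the attach must be rejected.
  NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key2
  rpc_cmd nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

  rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # Correct host key but wrong controller key: bidirectional authentication must still fail.
  NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey2
  rpc_cmd nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"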
00:19:46.750 16:11:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:46.750 16:11:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.750 16:11:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:46.750 16:11:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:46.750 16:11:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:19:46.750 16:11:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.750 16:11:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.013 16:11:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.013 16:11:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:19:47.013 16:11:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:47.013 16:11:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:47.013 16:11:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:47.013 16:11:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:47.013 16:11:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.013 16:11:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:47.013 16:11:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.013 16:11:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.013 16:11:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.013 16:11:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:47.013 16:11:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:47.655 00:19:47.655 16:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:47.655 16:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:47.655 16:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.655 16:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.655 16:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.655 16:11:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.655 16:11:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.655 16:11:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.655 16:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:47.655 { 00:19:47.655 
"cntlid": 1, 00:19:47.655 "qid": 0, 00:19:47.655 "state": "enabled", 00:19:47.655 "thread": "nvmf_tgt_poll_group_000", 00:19:47.655 "listen_address": { 00:19:47.655 "trtype": "TCP", 00:19:47.655 "adrfam": "IPv4", 00:19:47.655 "traddr": "10.0.0.2", 00:19:47.655 "trsvcid": "4420" 00:19:47.655 }, 00:19:47.655 "peer_address": { 00:19:47.655 "trtype": "TCP", 00:19:47.655 "adrfam": "IPv4", 00:19:47.655 "traddr": "10.0.0.1", 00:19:47.655 "trsvcid": "49554" 00:19:47.655 }, 00:19:47.655 "auth": { 00:19:47.655 "state": "completed", 00:19:47.655 "digest": "sha512", 00:19:47.655 "dhgroup": "ffdhe8192" 00:19:47.655 } 00:19:47.655 } 00:19:47.655 ]' 00:19:47.655 16:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:47.655 16:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:47.655 16:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:47.655 16:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:47.655 16:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:47.916 16:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.916 16:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.916 16:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.916 16:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NTQ1Y2Y0MDc5Mzc3MTgzNWYyZTZmOTQ3ZmZjMDBlZWZiNTllMGMyY2U1YWYzZjVhYmQ0YjRlMDUxM2I4NzNkMSKmw9Y=: 00:19:48.856 16:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.856 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.856 16:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:48.856 16:11:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.856 16:11:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.856 16:11:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.856 16:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:48.856 16:11:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.856 16:11:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.856 16:11:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.856 16:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:48.856 16:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:48.856 16:11:24 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:48.856 16:11:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:48.856 16:11:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:48.856 16:11:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:48.856 16:11:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:48.856 16:11:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:48.856 16:11:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:48.856 16:11:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:48.856 16:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:49.117 request: 00:19:49.117 { 00:19:49.117 "name": "nvme0", 00:19:49.117 "trtype": "tcp", 00:19:49.117 "traddr": "10.0.0.2", 00:19:49.117 "adrfam": "ipv4", 00:19:49.117 "trsvcid": "4420", 00:19:49.117 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:49.117 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:49.117 "prchk_reftag": false, 00:19:49.117 "prchk_guard": false, 00:19:49.117 "hdgst": false, 00:19:49.117 "ddgst": false, 00:19:49.117 "dhchap_key": "key3", 00:19:49.117 "method": "bdev_nvme_attach_controller", 00:19:49.117 "req_id": 1 00:19:49.117 } 00:19:49.117 Got JSON-RPC error response 00:19:49.117 response: 00:19:49.117 { 00:19:49.117 "code": -5, 00:19:49.117 "message": "Input/output error" 00:19:49.117 } 00:19:49.117 16:11:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:49.117 16:11:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:49.117 16:11:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:49.117 16:11:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:49.117 16:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:19:49.117 16:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:19:49.117 16:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:49.117 16:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:49.117 16:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:49.117 16:11:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:49.117 16:11:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:49.117 16:11:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:49.117 16:11:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:49.117 16:11:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:49.117 16:11:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:49.117 16:11:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:49.117 16:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:49.377 request: 00:19:49.377 { 00:19:49.377 "name": "nvme0", 00:19:49.377 "trtype": "tcp", 00:19:49.377 "traddr": "10.0.0.2", 00:19:49.377 "adrfam": "ipv4", 00:19:49.377 "trsvcid": "4420", 00:19:49.377 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:49.377 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:49.377 "prchk_reftag": false, 00:19:49.377 "prchk_guard": false, 00:19:49.377 "hdgst": false, 00:19:49.377 "ddgst": false, 00:19:49.377 "dhchap_key": "key3", 00:19:49.377 "method": "bdev_nvme_attach_controller", 00:19:49.377 "req_id": 1 00:19:49.377 } 00:19:49.377 Got JSON-RPC error response 00:19:49.377 response: 00:19:49.377 { 00:19:49.377 "code": -5, 00:19:49.377 "message": "Input/output error" 00:19:49.377 } 00:19:49.377 16:11:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:49.377 16:11:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:49.377 16:11:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:49.377 16:11:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:49.377 16:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:49.377 16:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:19:49.377 16:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:49.377 16:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:49.377 16:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:49.377 16:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:49.638 16:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:49.638 16:11:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.638 16:11:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.638 16:11:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.638 16:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:49.638 16:11:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.638 16:11:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.638 16:11:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.638 16:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:49.638 16:11:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:49.638 16:11:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:49.638 16:11:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:49.638 16:11:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:49.638 16:11:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:49.638 16:11:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:49.638 16:11:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:49.639 16:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:49.639 request: 00:19:49.639 { 00:19:49.639 "name": "nvme0", 00:19:49.639 "trtype": "tcp", 00:19:49.639 "traddr": "10.0.0.2", 00:19:49.639 "adrfam": "ipv4", 00:19:49.639 "trsvcid": "4420", 00:19:49.639 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:49.639 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:49.639 "prchk_reftag": false, 00:19:49.639 "prchk_guard": false, 00:19:49.639 "hdgst": false, 00:19:49.639 "ddgst": false, 00:19:49.639 
"dhchap_key": "key0", 00:19:49.639 "dhchap_ctrlr_key": "key1", 00:19:49.639 "method": "bdev_nvme_attach_controller", 00:19:49.639 "req_id": 1 00:19:49.639 } 00:19:49.639 Got JSON-RPC error response 00:19:49.639 response: 00:19:49.639 { 00:19:49.639 "code": -5, 00:19:49.639 "message": "Input/output error" 00:19:49.639 } 00:19:49.639 16:11:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:49.639 16:11:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:49.639 16:11:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:49.639 16:11:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:49.639 16:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:49.639 16:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:49.898 00:19:49.898 16:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:19:49.898 16:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:19:49.898 16:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.157 16:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.157 16:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.157 16:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.157 16:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:19:50.157 16:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:19:50.157 16:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2281746 00:19:50.157 16:11:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2281746 ']' 00:19:50.157 16:11:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2281746 00:19:50.416 16:11:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:50.416 16:11:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:50.416 16:11:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2281746 00:19:50.416 16:11:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:50.416 16:11:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:50.416 16:11:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2281746' 00:19:50.416 killing process with pid 2281746 00:19:50.416 16:11:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2281746 00:19:50.416 16:11:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2281746 
00:19:50.416 16:11:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:50.416 16:11:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:50.416 16:11:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:19:50.416 16:11:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:50.416 16:11:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:19:50.416 16:11:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:50.416 16:11:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:50.416 rmmod nvme_tcp 00:19:50.676 rmmod nvme_fabrics 00:19:50.676 rmmod nvme_keyring 00:19:50.676 16:11:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:50.676 16:11:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:19:50.676 16:11:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:19:50.676 16:11:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 2307888 ']' 00:19:50.676 16:11:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2307888 00:19:50.676 16:11:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2307888 ']' 00:19:50.676 16:11:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2307888 00:19:50.676 16:11:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:50.676 16:11:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:50.677 16:11:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2307888 00:19:50.677 16:11:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:50.677 16:11:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:50.677 16:11:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2307888' 00:19:50.677 killing process with pid 2307888 00:19:50.677 16:11:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2307888 00:19:50.677 16:11:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2307888 00:19:50.936 16:11:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:50.936 16:11:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:50.936 16:11:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:50.936 16:11:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:50.936 16:11:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:50.936 16:11:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:50.936 16:11:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:50.936 16:11:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:52.846 16:11:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:52.846 16:11:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.wNp /tmp/spdk.key-sha256.x0p /tmp/spdk.key-sha384.TrO /tmp/spdk.key-sha512.k6x /tmp/spdk.key-sha512.FTR /tmp/spdk.key-sha384.Uy8 /tmp/spdk.key-sha256.K82 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:52.846 00:19:52.846 real 2m23.467s 00:19:52.846 user 5m18.668s 00:19:52.846 sys 0m21.151s 00:19:52.847 16:11:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:52.847 16:11:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.847 ************************************ 00:19:52.847 END TEST nvmf_auth_target 00:19:52.847 ************************************ 00:19:52.847 16:11:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:52.847 16:11:28 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:19:52.847 16:11:28 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:52.847 16:11:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:19:52.847 16:11:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:52.847 16:11:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:52.847 ************************************ 00:19:52.847 START TEST nvmf_bdevio_no_huge 00:19:52.847 ************************************ 00:19:52.847 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:53.108 * Looking for test storage... 00:19:53.108 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:53.108 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:53.108 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:53.108 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:53.108 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:53.108 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:53.108 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:53.109 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:53.109 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:53.109 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:53.109 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:53.109 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:53.109 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:53.109 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:53.109 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:53.109 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:53.109 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:53.109 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:53.109 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:53.109 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:53.109 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:53.109 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:53.109 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:53.109 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.109 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.109 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.109 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:53.109 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.109 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:19:53.109 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:53.109 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:53.109 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:53.109 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
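The NVMF_APP argument list being assembled here, with the NO_HUGE flags appended just below, is what lets this bdevio pass run without hugepages. Condensed from the invocations recorded later in this section, the target and the bdevio client end up being launched roughly as follows; the paths, the netns name cvl_0_0_ns_spdk and the memory/core settings are the ones this run uses, and the JSON fed on fd 62 is the single NVMe-oF TCP controller config that gen_nvmf_target_json prints further down.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # target side: nvmf_tgt inside the target netns, cores 3-6 (mask 0x78),
  # 1024 MB of ordinary memory instead of hugepages
  ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
  # initiator side: bdevio reads its bdev configuration (one NVMe-oF TCP
  # controller) as JSON from file descriptor 62, also without hugepages
  $SPDK/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024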
00:19:53.109 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:53.109 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:53.109 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:53.109 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:53.109 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:53.109 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:53.109 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:53.109 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:53.109 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:53.109 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:53.109 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:53.109 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:53.109 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:53.109 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:53.109 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:53.109 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:53.109 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:53.109 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:19:53.109 16:11:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:01.256 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:01.256 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:01.256 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:01.256 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:01.256 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:01.257 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:01.257 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.482 ms 00:20:01.257 00:20:01.257 --- 10.0.0.2 ping statistics --- 00:20:01.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.257 rtt min/avg/max/mdev = 0.482/0.482/0.482/0.000 ms 00:20:01.257 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:01.257 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:01.257 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:20:01.257 00:20:01.257 --- 10.0.0.1 ping statistics --- 00:20:01.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.257 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:20:01.257 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:01.257 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:20:01.257 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:01.257 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:01.257 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:01.257 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:01.257 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:01.257 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:01.257 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:01.257 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:01.257 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:01.257 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:01.257 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:01.257 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=2313111 00:20:01.257 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 2313111 00:20:01.257 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:01.257 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 2313111 ']' 00:20:01.257 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.257 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:01.257 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:01.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:01.257 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:01.257 16:11:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:01.257 [2024-07-15 16:11:36.011693] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:20:01.257 [2024-07-15 16:11:36.011765] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:01.257 [2024-07-15 16:11:36.105557] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:01.257 [2024-07-15 16:11:36.213330] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:01.257 [2024-07-15 16:11:36.213384] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:01.257 [2024-07-15 16:11:36.213392] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:01.257 [2024-07-15 16:11:36.213399] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:01.257 [2024-07-15 16:11:36.213405] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:01.257 [2024-07-15 16:11:36.213566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:01.257 [2024-07-15 16:11:36.213725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:20:01.257 [2024-07-15 16:11:36.213883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:20:01.257 [2024-07-15 16:11:36.213883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:01.257 16:11:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:01.257 16:11:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:20:01.257 16:11:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:01.257 16:11:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:01.257 16:11:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:01.257 16:11:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:01.257 16:11:36 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:01.257 16:11:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.257 16:11:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:01.257 [2024-07-15 16:11:36.861165] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:01.257 16:11:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.257 16:11:36 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:01.257 16:11:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.257 16:11:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:01.257 Malloc0 00:20:01.257 16:11:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.257 16:11:36 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:01.257 16:11:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.257 16:11:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:01.257 16:11:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.257 16:11:36 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:01.257 16:11:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.257 16:11:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:01.257 16:11:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.257 16:11:36 
nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:01.257 16:11:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.257 16:11:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:01.257 [2024-07-15 16:11:36.914907] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:01.257 16:11:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.257 16:11:36 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:01.257 16:11:36 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:01.257 16:11:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:20:01.257 16:11:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:20:01.257 16:11:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:01.257 16:11:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:01.257 { 00:20:01.257 "params": { 00:20:01.257 "name": "Nvme$subsystem", 00:20:01.257 "trtype": "$TEST_TRANSPORT", 00:20:01.257 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:01.257 "adrfam": "ipv4", 00:20:01.257 "trsvcid": "$NVMF_PORT", 00:20:01.257 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:01.257 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:01.257 "hdgst": ${hdgst:-false}, 00:20:01.257 "ddgst": ${ddgst:-false} 00:20:01.257 }, 00:20:01.257 "method": "bdev_nvme_attach_controller" 00:20:01.257 } 00:20:01.257 EOF 00:20:01.257 )") 00:20:01.257 16:11:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:20:01.257 16:11:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:20:01.257 16:11:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:20:01.257 16:11:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:01.257 "params": { 00:20:01.257 "name": "Nvme1", 00:20:01.257 "trtype": "tcp", 00:20:01.257 "traddr": "10.0.0.2", 00:20:01.257 "adrfam": "ipv4", 00:20:01.257 "trsvcid": "4420", 00:20:01.257 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:01.257 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:01.257 "hdgst": false, 00:20:01.257 "ddgst": false 00:20:01.257 }, 00:20:01.257 "method": "bdev_nvme_attach_controller" 00:20:01.257 }' 00:20:01.257 [2024-07-15 16:11:36.981918] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
00:20:01.257 [2024-07-15 16:11:36.981984] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2313183 ] 00:20:01.257 [2024-07-15 16:11:37.050291] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:01.519 [2024-07-15 16:11:37.147496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:01.519 [2024-07-15 16:11:37.147640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:01.519 [2024-07-15 16:11:37.147643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.519 I/O targets: 00:20:01.519 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:01.519 00:20:01.519 00:20:01.519 CUnit - A unit testing framework for C - Version 2.1-3 00:20:01.519 http://cunit.sourceforge.net/ 00:20:01.519 00:20:01.519 00:20:01.519 Suite: bdevio tests on: Nvme1n1 00:20:01.780 Test: blockdev write read block ...passed 00:20:01.780 Test: blockdev write zeroes read block ...passed 00:20:01.780 Test: blockdev write zeroes read no split ...passed 00:20:01.780 Test: blockdev write zeroes read split ...passed 00:20:01.780 Test: blockdev write zeroes read split partial ...passed 00:20:01.780 Test: blockdev reset ...[2024-07-15 16:11:37.567566] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:01.780 [2024-07-15 16:11:37.567629] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a93c10 (9): Bad file descriptor 00:20:01.780 [2024-07-15 16:11:37.584253] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:01.780 passed 00:20:01.780 Test: blockdev write read 8 blocks ...passed 00:20:01.780 Test: blockdev write read size > 128k ...passed 00:20:01.780 Test: blockdev write read invalid size ...passed 00:20:02.042 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:02.042 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:02.042 Test: blockdev write read max offset ...passed 00:20:02.042 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:02.042 Test: blockdev writev readv 8 blocks ...passed 00:20:02.042 Test: blockdev writev readv 30 x 1block ...passed 00:20:02.042 Test: blockdev writev readv block ...passed 00:20:02.042 Test: blockdev writev readv size > 128k ...passed 00:20:02.042 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:02.042 Test: blockdev comparev and writev ...[2024-07-15 16:11:37.813371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:02.042 [2024-07-15 16:11:37.813400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.042 [2024-07-15 16:11:37.813411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:02.042 [2024-07-15 16:11:37.813416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:02.042 [2024-07-15 16:11:37.813977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:02.042 [2024-07-15 16:11:37.813985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:02.042 [2024-07-15 16:11:37.813995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:02.042 [2024-07-15 16:11:37.814000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:02.042 [2024-07-15 16:11:37.814560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:02.042 [2024-07-15 16:11:37.814569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:02.042 [2024-07-15 16:11:37.814578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:02.042 [2024-07-15 16:11:37.814583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:02.042 [2024-07-15 16:11:37.815156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:02.042 [2024-07-15 16:11:37.815164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:02.042 [2024-07-15 16:11:37.815173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:02.042 [2024-07-15 16:11:37.815178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:02.042 passed 00:20:02.303 Test: blockdev nvme passthru rw ...passed 00:20:02.303 Test: blockdev nvme passthru vendor specific ...[2024-07-15 16:11:37.900175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:02.303 [2024-07-15 16:11:37.900189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:02.303 [2024-07-15 16:11:37.900605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:02.303 [2024-07-15 16:11:37.900613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:02.303 [2024-07-15 16:11:37.901038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:02.303 [2024-07-15 16:11:37.901046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:02.303 [2024-07-15 16:11:37.901487] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:02.303 [2024-07-15 16:11:37.901495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:02.303 passed 00:20:02.303 Test: blockdev nvme admin passthru ...passed 00:20:02.303 Test: blockdev copy ...passed 00:20:02.303 00:20:02.303 Run Summary: Type Total Ran Passed Failed Inactive 00:20:02.303 suites 1 1 n/a 0 0 00:20:02.303 tests 23 23 23 0 0 00:20:02.303 asserts 152 152 152 0 n/a 00:20:02.303 00:20:02.303 Elapsed time = 1.256 seconds 00:20:02.564 16:11:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:02.564 16:11:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.564 16:11:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:02.564 16:11:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.564 16:11:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:02.564 16:11:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:20:02.564 16:11:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:02.564 16:11:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:20:02.564 16:11:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:02.564 16:11:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:20:02.564 16:11:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:02.564 16:11:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:02.564 rmmod nvme_tcp 00:20:02.564 rmmod nvme_fabrics 00:20:02.564 rmmod nvme_keyring 00:20:02.564 16:11:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:02.564 16:11:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:20:02.564 16:11:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:20:02.564 16:11:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 2313111 ']' 00:20:02.564 16:11:38 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 2313111 00:20:02.564 16:11:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 2313111 ']' 00:20:02.564 16:11:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 2313111 00:20:02.564 16:11:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:20:02.564 16:11:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:02.564 16:11:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2313111 00:20:02.564 16:11:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:20:02.564 16:11:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:20:02.564 16:11:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2313111' 00:20:02.564 killing process with pid 2313111 00:20:02.564 16:11:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 2313111 00:20:02.564 16:11:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 2313111 00:20:02.825 16:11:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:02.825 16:11:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:02.825 16:11:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:02.825 16:11:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:02.825 16:11:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:02.825 16:11:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:02.825 16:11:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:02.825 16:11:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:05.372 16:11:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:05.372 00:20:05.372 real 0m12.038s 00:20:05.372 user 0m13.754s 00:20:05.372 sys 0m6.253s 00:20:05.372 16:11:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:05.372 16:11:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:05.372 ************************************ 00:20:05.372 END TEST nvmf_bdevio_no_huge 00:20:05.372 ************************************ 00:20:05.372 16:11:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:05.372 16:11:40 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:05.372 16:11:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:05.372 16:11:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:05.372 16:11:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:05.372 ************************************ 00:20:05.372 START TEST nvmf_tls 00:20:05.372 ************************************ 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:05.372 * Looking for test storage... 
00:20:05.372 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:20:05.372 16:11:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:20:11.963 
16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:11.963 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:11.963 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:11.963 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:11.963 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:11.963 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:12.262 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:12.262 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:12.262 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:12.262 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:12.262 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:20:12.262 00:20:12.262 --- 10.0.0.2 ping statistics --- 00:20:12.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:12.262 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:20:12.262 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:12.262 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:12.262 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.385 ms 00:20:12.262 00:20:12.262 --- 10.0.0.1 ping statistics --- 00:20:12.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:12.262 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:20:12.262 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:12.262 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:20:12.262 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:12.262 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:12.262 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:12.262 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:12.262 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:12.262 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:12.262 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:12.262 16:11:47 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:12.262 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:12.262 16:11:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:12.262 16:11:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.262 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2317640 00:20:12.262 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2317640 00:20:12.262 16:11:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:12.262 16:11:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2317640 ']' 00:20:12.262 16:11:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:12.262 16:11:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:12.262 16:11:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:12.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:12.262 16:11:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:12.262 16:11:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.262 [2024-07-15 16:11:48.039801] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:20:12.262 [2024-07-15 16:11:48.039862] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:12.262 EAL: No free 2048 kB hugepages reported on node 1 00:20:12.521 [2024-07-15 16:11:48.128659] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.521 [2024-07-15 16:11:48.225922] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:12.521 [2024-07-15 16:11:48.225980] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:12.521 [2024-07-15 16:11:48.225989] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:12.521 [2024-07-15 16:11:48.225996] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:12.521 [2024-07-15 16:11:48.226002] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:12.521 [2024-07-15 16:11:48.226028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:13.093 16:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:13.093 16:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:13.093 16:11:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:13.093 16:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:13.093 16:11:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.093 16:11:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:13.093 16:11:48 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:20:13.093 16:11:48 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:13.354 true 00:20:13.354 16:11:49 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:13.354 16:11:49 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:20:13.615 16:11:49 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:20:13.615 16:11:49 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:20:13.615 16:11:49 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:13.615 16:11:49 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:13.615 16:11:49 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:20:13.876 16:11:49 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:20:13.877 16:11:49 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:20:13.877 16:11:49 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:14.138 16:11:49 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:14.138 16:11:49 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:20:14.138 16:11:49 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:20:14.138 16:11:49 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:20:14.138 16:11:49 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:14.138 16:11:49 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:20:14.400 16:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:20:14.400 16:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:20:14.400 16:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:14.662 16:11:50 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:14.662 16:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:20:14.662 16:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:20:14.662 16:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:20:14.662 16:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:14.923 16:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:14.923 16:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:20:14.923 16:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:20:14.923 16:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:20:14.923 16:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:14.923 16:11:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:14.923 16:11:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:14.923 16:11:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:14.923 16:11:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:14.923 16:11:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:14.923 16:11:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:15.184 16:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:15.184 16:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:15.184 16:11:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:15.184 16:11:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:15.184 16:11:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:15.184 16:11:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:20:15.184 16:11:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:15.184 16:11:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:15.184 16:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:15.184 16:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:20:15.184 16:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.2wphDcBoYa 00:20:15.184 16:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:15.184 16:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.VQu3SRxPr2 00:20:15.184 16:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:15.184 16:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:15.184 16:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.2wphDcBoYa 00:20:15.184 16:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.VQu3SRxPr2 00:20:15.184 16:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:20:15.184 16:11:51 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:15.446 16:11:51 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.2wphDcBoYa 00:20:15.446 16:11:51 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.2wphDcBoYa 00:20:15.446 16:11:51 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:15.707 [2024-07-15 16:11:51.424545] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:15.707 16:11:51 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:15.968 16:11:51 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:15.968 [2024-07-15 16:11:51.741286] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:15.968 [2024-07-15 16:11:51.741474] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:15.968 16:11:51 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:16.229 malloc0 00:20:16.229 16:11:51 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:16.229 16:11:52 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2wphDcBoYa 00:20:16.490 [2024-07-15 16:11:52.196404] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:16.490 16:11:52 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.2wphDcBoYa 00:20:16.490 EAL: No free 2048 kB hugepages reported on node 1 00:20:26.488 Initializing NVMe Controllers 00:20:26.488 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:26.488 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:26.488 Initialization complete. Launching workers. 
00:20:26.488 ======================================================== 00:20:26.488 Latency(us) 00:20:26.488 Device Information : IOPS MiB/s Average min max 00:20:26.488 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18990.49 74.18 3370.14 957.00 7475.24 00:20:26.488 ======================================================== 00:20:26.488 Total : 18990.49 74.18 3370.14 957.00 7475.24 00:20:26.488 00:20:26.488 16:12:02 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2wphDcBoYa 00:20:26.488 16:12:02 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:26.488 16:12:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:26.488 16:12:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:26.488 16:12:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.2wphDcBoYa' 00:20:26.488 16:12:02 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:26.488 16:12:02 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2320669 00:20:26.488 16:12:02 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:26.488 16:12:02 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2320669 /var/tmp/bdevperf.sock 00:20:26.488 16:12:02 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:26.488 16:12:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2320669 ']' 00:20:26.488 16:12:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:26.488 16:12:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:26.488 16:12:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:26.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:26.488 16:12:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:26.488 16:12:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:26.749 [2024-07-15 16:12:02.361572] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
00:20:26.749 [2024-07-15 16:12:02.361624] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2320669 ] 00:20:26.749 EAL: No free 2048 kB hugepages reported on node 1 00:20:26.749 [2024-07-15 16:12:02.410434] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.749 [2024-07-15 16:12:02.463203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:27.318 16:12:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:27.318 16:12:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:27.318 16:12:03 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2wphDcBoYa 00:20:27.577 [2024-07-15 16:12:03.268111] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:27.577 [2024-07-15 16:12:03.268167] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:27.577 TLSTESTn1 00:20:27.577 16:12:03 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:27.837 Running I/O for 10 seconds... 00:20:37.829 00:20:37.829 Latency(us) 00:20:37.829 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.829 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:37.829 Verification LBA range: start 0x0 length 0x2000 00:20:37.829 TLSTESTn1 : 10.03 3087.21 12.06 0.00 0.00 41389.74 6280.53 76021.76 00:20:37.829 =================================================================================================================== 00:20:37.829 Total : 3087.21 12.06 0.00 0.00 41389.74 6280.53 76021.76 00:20:37.829 0 00:20:37.829 16:12:13 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:37.829 16:12:13 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2320669 00:20:37.829 16:12:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2320669 ']' 00:20:37.829 16:12:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2320669 00:20:37.829 16:12:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:37.829 16:12:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:37.829 16:12:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2320669 00:20:37.829 16:12:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:37.829 16:12:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:37.829 16:12:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2320669' 00:20:37.829 killing process with pid 2320669 00:20:37.829 16:12:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2320669 00:20:37.829 Received shutdown signal, test time was about 10.000000 seconds 00:20:37.829 00:20:37.829 Latency(us) 00:20:37.829 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:20:37.829 =================================================================================================================== 00:20:37.829 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:37.829 [2024-07-15 16:12:13.583430] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:37.829 16:12:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2320669 00:20:38.091 16:12:13 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VQu3SRxPr2 00:20:38.091 16:12:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:38.091 16:12:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VQu3SRxPr2 00:20:38.091 16:12:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:38.091 16:12:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:38.091 16:12:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:38.091 16:12:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:38.091 16:12:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VQu3SRxPr2 00:20:38.091 16:12:13 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:38.091 16:12:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:38.091 16:12:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:38.091 16:12:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.VQu3SRxPr2' 00:20:38.091 16:12:13 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:38.091 16:12:13 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2323165 00:20:38.091 16:12:13 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:38.091 16:12:13 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2323165 /var/tmp/bdevperf.sock 00:20:38.091 16:12:13 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:38.091 16:12:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2323165 ']' 00:20:38.091 16:12:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:38.091 16:12:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:38.091 16:12:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:38.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:38.091 16:12:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:38.091 16:12:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.091 [2024-07-15 16:12:13.745149] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
00:20:38.091 [2024-07-15 16:12:13.745205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2323165 ] 00:20:38.091 EAL: No free 2048 kB hugepages reported on node 1 00:20:38.091 [2024-07-15 16:12:13.794850] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.091 [2024-07-15 16:12:13.846741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:39.033 16:12:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:39.033 16:12:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:39.033 16:12:14 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.VQu3SRxPr2 00:20:39.033 [2024-07-15 16:12:14.651612] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:39.033 [2024-07-15 16:12:14.651673] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:39.033 [2024-07-15 16:12:14.656130] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:39.033 [2024-07-15 16:12:14.656754] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12eeec0 (107): Transport endpoint is not connected 00:20:39.033 [2024-07-15 16:12:14.657749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12eeec0 (9): Bad file descriptor 00:20:39.033 [2024-07-15 16:12:14.658751] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:39.033 [2024-07-15 16:12:14.658759] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:39.033 [2024-07-15 16:12:14.658766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
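Note on the failure just above: this is the first of the suite's deliberate negative cases. The target was configured earlier in this trace as a TLS NVMe/TCP listener (the ssl socket implementation pinned to --tls-version 13, the listener added with -k, the TLS listener flag used by this suite, and the PSK file /tmp/tmp.2wphDcBoYa registered for host1), and that same key had just passed both the spdk_nvme_perf run and the TLSTESTn1 bdevperf run. The attach here instead presented the second, unregistered key /tmp/tmp.VQu3SRxPr2, so the connection is torn down (the errno 107 and bad-file-descriptor messages above) and the controller is left in the failed state; the JSON-RPC error for this attempt is dumped next. For reference, a condensed sketch of the target-side RPC sequence exactly as traced above; the tls-version 7 and ktls on/off probes are omitted and the absolute workspace path is shortened to ./scripts/rpc.py, nothing else is changed:

  ./scripts/rpc.py sock_set_default_impl -i ssl
  ./scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py nvmf_create_transport -t tcp -o
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
      --psk /tmp/tmp.2wphDcBoYa    # key file written above as NVMeTLSkey-1:01:MDAxMTIy...JEiQ: and chmod 0600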
00:20:39.033 request: 00:20:39.033 { 00:20:39.034 "name": "TLSTEST", 00:20:39.034 "trtype": "tcp", 00:20:39.034 "traddr": "10.0.0.2", 00:20:39.034 "adrfam": "ipv4", 00:20:39.034 "trsvcid": "4420", 00:20:39.034 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:39.034 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:39.034 "prchk_reftag": false, 00:20:39.034 "prchk_guard": false, 00:20:39.034 "hdgst": false, 00:20:39.034 "ddgst": false, 00:20:39.034 "psk": "/tmp/tmp.VQu3SRxPr2", 00:20:39.034 "method": "bdev_nvme_attach_controller", 00:20:39.034 "req_id": 1 00:20:39.034 } 00:20:39.034 Got JSON-RPC error response 00:20:39.034 response: 00:20:39.034 { 00:20:39.034 "code": -5, 00:20:39.034 "message": "Input/output error" 00:20:39.034 } 00:20:39.034 16:12:14 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2323165 00:20:39.034 16:12:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2323165 ']' 00:20:39.034 16:12:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2323165 00:20:39.034 16:12:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:39.034 16:12:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:39.034 16:12:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2323165 00:20:39.034 16:12:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:39.034 16:12:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:39.034 16:12:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2323165' 00:20:39.034 killing process with pid 2323165 00:20:39.034 16:12:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2323165 00:20:39.034 Received shutdown signal, test time was about 10.000000 seconds 00:20:39.034 00:20:39.034 Latency(us) 00:20:39.034 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:39.034 =================================================================================================================== 00:20:39.034 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:39.034 [2024-07-15 16:12:14.743852] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:39.034 16:12:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2323165 00:20:39.034 16:12:14 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:39.034 16:12:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:39.034 16:12:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:39.034 16:12:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:39.034 16:12:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:39.034 16:12:14 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.2wphDcBoYa 00:20:39.034 16:12:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:39.034 16:12:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.2wphDcBoYa 00:20:39.034 16:12:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:39.034 16:12:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:39.034 16:12:14 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:39.034 16:12:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:39.034 16:12:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.2wphDcBoYa 00:20:39.034 16:12:14 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:39.034 16:12:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:39.034 16:12:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:39.034 16:12:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.2wphDcBoYa' 00:20:39.034 16:12:14 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:39.034 16:12:14 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2323493 00:20:39.034 16:12:14 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:39.034 16:12:14 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2323493 /var/tmp/bdevperf.sock 00:20:39.034 16:12:14 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:39.034 16:12:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2323493 ']' 00:20:39.034 16:12:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:39.034 16:12:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:39.034 16:12:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:39.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:39.034 16:12:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:39.034 16:12:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:39.295 [2024-07-15 16:12:14.899190] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
00:20:39.295 [2024-07-15 16:12:14.899244] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2323493 ] 00:20:39.295 EAL: No free 2048 kB hugepages reported on node 1 00:20:39.295 [2024-07-15 16:12:14.949301] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.295 [2024-07-15 16:12:14.999550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:39.878 16:12:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:39.878 16:12:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:39.878 16:12:15 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.2wphDcBoYa 00:20:40.181 [2024-07-15 16:12:15.812375] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:40.181 [2024-07-15 16:12:15.812437] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:40.181 [2024-07-15 16:12:15.818044] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:40.181 [2024-07-15 16:12:15.818063] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:40.181 [2024-07-15 16:12:15.818083] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:40.181 [2024-07-15 16:12:15.818377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x221cec0 (107): Transport endpoint is not connected 00:20:40.181 [2024-07-15 16:12:15.819371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x221cec0 (9): Bad file descriptor 00:20:40.181 [2024-07-15 16:12:15.820373] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.181 [2024-07-15 16:12:15.820381] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:40.181 [2024-07-15 16:12:15.820389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
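Note on the failure just above: the second negative case. The attach used the registered key file /tmp/tmp.2wphDcBoYa but identified itself as nqn.2016-06.io.spdk:host2, and the target reports that it cannot find a PSK for the identity 'NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1' (only host1 was added with that key), so the controller ends up in the same failed state and the JSON-RPC error for this attempt is dumped next. Restated from the trace, with only the workspace path shortened to ./scripts/rpc.py, the failing initiator-side call is:

  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 \
      --psk /tmp/tmp.2wphDcBoYa    # valid key, but no PSK is registered for host2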
00:20:40.181 request: 00:20:40.181 { 00:20:40.181 "name": "TLSTEST", 00:20:40.181 "trtype": "tcp", 00:20:40.181 "traddr": "10.0.0.2", 00:20:40.181 "adrfam": "ipv4", 00:20:40.181 "trsvcid": "4420", 00:20:40.181 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:40.181 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:40.181 "prchk_reftag": false, 00:20:40.181 "prchk_guard": false, 00:20:40.181 "hdgst": false, 00:20:40.181 "ddgst": false, 00:20:40.181 "psk": "/tmp/tmp.2wphDcBoYa", 00:20:40.181 "method": "bdev_nvme_attach_controller", 00:20:40.181 "req_id": 1 00:20:40.181 } 00:20:40.181 Got JSON-RPC error response 00:20:40.181 response: 00:20:40.181 { 00:20:40.181 "code": -5, 00:20:40.181 "message": "Input/output error" 00:20:40.181 } 00:20:40.181 16:12:15 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2323493 00:20:40.181 16:12:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2323493 ']' 00:20:40.181 16:12:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2323493 00:20:40.181 16:12:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:40.181 16:12:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:40.181 16:12:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2323493 00:20:40.181 16:12:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:40.181 16:12:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:40.181 16:12:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2323493' 00:20:40.181 killing process with pid 2323493 00:20:40.181 16:12:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2323493 00:20:40.181 Received shutdown signal, test time was about 10.000000 seconds 00:20:40.181 00:20:40.181 Latency(us) 00:20:40.181 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:40.181 =================================================================================================================== 00:20:40.181 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:40.181 [2024-07-15 16:12:15.908804] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:40.181 16:12:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2323493 00:20:40.441 16:12:16 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:40.441 16:12:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:40.441 16:12:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:40.441 16:12:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:40.441 16:12:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:40.441 16:12:16 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.2wphDcBoYa 00:20:40.441 16:12:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:40.441 16:12:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.2wphDcBoYa 00:20:40.441 16:12:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:40.441 16:12:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:40.441 16:12:16 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:40.441 16:12:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:40.441 16:12:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.2wphDcBoYa 00:20:40.441 16:12:16 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:40.441 16:12:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:40.441 16:12:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:40.441 16:12:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.2wphDcBoYa' 00:20:40.441 16:12:16 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:40.441 16:12:16 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2323789 00:20:40.441 16:12:16 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:40.441 16:12:16 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2323789 /var/tmp/bdevperf.sock 00:20:40.441 16:12:16 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:40.441 16:12:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2323789 ']' 00:20:40.441 16:12:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:40.441 16:12:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:40.441 16:12:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:40.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:40.441 16:12:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:40.441 16:12:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:40.441 [2024-07-15 16:12:16.064891] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
00:20:40.441 [2024-07-15 16:12:16.064946] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2323789 ] 00:20:40.441 EAL: No free 2048 kB hugepages reported on node 1 00:20:40.441 [2024-07-15 16:12:16.114807] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.441 [2024-07-15 16:12:16.166652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:41.011 16:12:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:41.011 16:12:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:41.011 16:12:16 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2wphDcBoYa 00:20:41.272 [2024-07-15 16:12:16.975478] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:41.272 [2024-07-15 16:12:16.975538] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:41.272 [2024-07-15 16:12:16.980875] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:41.272 [2024-07-15 16:12:16.980894] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:41.272 [2024-07-15 16:12:16.980914] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:41.272 [2024-07-15 16:12:16.981641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a7fec0 (107): Transport endpoint is not connected 00:20:41.272 [2024-07-15 16:12:16.982637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a7fec0 (9): Bad file descriptor 00:20:41.272 [2024-07-15 16:12:16.983638] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:41.272 [2024-07-15 16:12:16.983646] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:41.272 [2024-07-15 16:12:16.983653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
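Note on the failure just above: the third negative case, the mirror image of the previous one. The attach kept the registered key and hostnqn host1 but targeted nqn.2016-06.io.spdk:cnode2, a subsystem this target never created, so the PSK lookup for 'NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2' fails and the controller again lands in the failed state; the JSON-RPC error follows next. The traced call, with only the workspace path shortened to ./scripts/rpc.py:

  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 \
      --psk /tmp/tmp.2wphDcBoYa    # valid key and host, but cnode2 has no PSK entry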
00:20:41.272 request: 00:20:41.272 { 00:20:41.272 "name": "TLSTEST", 00:20:41.272 "trtype": "tcp", 00:20:41.272 "traddr": "10.0.0.2", 00:20:41.272 "adrfam": "ipv4", 00:20:41.272 "trsvcid": "4420", 00:20:41.272 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:41.272 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:41.272 "prchk_reftag": false, 00:20:41.272 "prchk_guard": false, 00:20:41.272 "hdgst": false, 00:20:41.272 "ddgst": false, 00:20:41.272 "psk": "/tmp/tmp.2wphDcBoYa", 00:20:41.272 "method": "bdev_nvme_attach_controller", 00:20:41.272 "req_id": 1 00:20:41.272 } 00:20:41.272 Got JSON-RPC error response 00:20:41.272 response: 00:20:41.272 { 00:20:41.272 "code": -5, 00:20:41.272 "message": "Input/output error" 00:20:41.272 } 00:20:41.272 16:12:17 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2323789 00:20:41.272 16:12:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2323789 ']' 00:20:41.272 16:12:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2323789 00:20:41.272 16:12:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:41.272 16:12:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:41.272 16:12:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2323789 00:20:41.272 16:12:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:41.272 16:12:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:41.272 16:12:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2323789' 00:20:41.272 killing process with pid 2323789 00:20:41.272 16:12:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2323789 00:20:41.272 Received shutdown signal, test time was about 10.000000 seconds 00:20:41.272 00:20:41.272 Latency(us) 00:20:41.272 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:41.272 =================================================================================================================== 00:20:41.272 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:41.272 [2024-07-15 16:12:17.070468] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:41.272 16:12:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2323789 00:20:41.533 16:12:17 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:41.533 16:12:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:41.534 16:12:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:41.534 16:12:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:41.534 16:12:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:41.534 16:12:17 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:41.534 16:12:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:41.534 16:12:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:41.534 16:12:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:41.534 16:12:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:41.534 16:12:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:20:41.534 16:12:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:41.534 16:12:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:41.534 16:12:17 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:41.534 16:12:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:41.534 16:12:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:41.534 16:12:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:41.534 16:12:17 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:41.534 16:12:17 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2323869 00:20:41.534 16:12:17 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:41.534 16:12:17 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2323869 /var/tmp/bdevperf.sock 00:20:41.534 16:12:17 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:41.534 16:12:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2323869 ']' 00:20:41.534 16:12:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:41.534 16:12:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:41.534 16:12:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:41.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:41.534 16:12:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:41.534 16:12:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:41.534 [2024-07-15 16:12:17.225390] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
00:20:41.534 [2024-07-15 16:12:17.225442] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2323869 ] 00:20:41.534 EAL: No free 2048 kB hugepages reported on node 1 00:20:41.534 [2024-07-15 16:12:17.275413] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.534 [2024-07-15 16:12:17.326414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:42.476 16:12:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:42.476 16:12:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:42.476 16:12:17 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:42.476 [2024-07-15 16:12:18.134127] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:42.476 [2024-07-15 16:12:18.135954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eee4a0 (9): Bad file descriptor 00:20:42.476 [2024-07-15 16:12:18.136954] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:42.476 [2024-07-15 16:12:18.136963] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:42.476 [2024-07-15 16:12:18.136970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:42.476 request: 00:20:42.476 { 00:20:42.476 "name": "TLSTEST", 00:20:42.476 "trtype": "tcp", 00:20:42.476 "traddr": "10.0.0.2", 00:20:42.476 "adrfam": "ipv4", 00:20:42.476 "trsvcid": "4420", 00:20:42.476 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:42.476 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:42.476 "prchk_reftag": false, 00:20:42.476 "prchk_guard": false, 00:20:42.476 "hdgst": false, 00:20:42.476 "ddgst": false, 00:20:42.476 "method": "bdev_nvme_attach_controller", 00:20:42.476 "req_id": 1 00:20:42.476 } 00:20:42.476 Got JSON-RPC error response 00:20:42.476 response: 00:20:42.476 { 00:20:42.476 "code": -5, 00:20:42.476 "message": "Input/output error" 00:20:42.476 } 00:20:42.476 16:12:18 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2323869 00:20:42.476 16:12:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2323869 ']' 00:20:42.476 16:12:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2323869 00:20:42.476 16:12:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:42.476 16:12:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:42.476 16:12:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2323869 00:20:42.476 16:12:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:42.476 16:12:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:42.476 16:12:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2323869' 00:20:42.476 killing process with pid 2323869 00:20:42.476 16:12:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2323869 00:20:42.476 Received shutdown signal, test time was about 10.000000 seconds 00:20:42.476 00:20:42.476 Latency(us) 00:20:42.476 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:42.476 =================================================================================================================== 00:20:42.476 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:42.476 16:12:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2323869 00:20:42.737 16:12:18 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:42.737 16:12:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:42.737 16:12:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:42.737 16:12:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:42.737 16:12:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:42.737 16:12:18 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 2317640 00:20:42.737 16:12:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2317640 ']' 00:20:42.737 16:12:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2317640 00:20:42.737 16:12:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:42.737 16:12:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:42.737 16:12:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2317640 00:20:42.737 16:12:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:42.737 16:12:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:42.737 16:12:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2317640' 00:20:42.737 
killing process with pid 2317640 00:20:42.737 16:12:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2317640 00:20:42.737 [2024-07-15 16:12:18.381631] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:42.737 16:12:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2317640 00:20:42.737 16:12:18 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:42.737 16:12:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:42.737 16:12:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:42.737 16:12:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:42.737 16:12:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:42.737 16:12:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:20:42.737 16:12:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:42.737 16:12:18 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:42.737 16:12:18 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:20:42.737 16:12:18 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.WBGxBNxgOa 00:20:42.737 16:12:18 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:42.737 16:12:18 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.WBGxBNxgOa 00:20:42.737 16:12:18 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:20:42.737 16:12:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:42.737 16:12:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:42.737 16:12:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:42.737 16:12:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2324205 00:20:42.737 16:12:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2324205 00:20:42.737 16:12:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:42.737 16:12:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2324205 ']' 00:20:42.737 16:12:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:42.737 16:12:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:42.737 16:12:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:42.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:42.737 16:12:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:42.737 16:12:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:42.998 [2024-07-15 16:12:18.610170] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
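A quick aside on the NVMeTLSkey-1:02:... string generated a few lines up: format_interchange_psk wraps the raw hex key into the NVMe TLS PSK interchange format by shelling out to Python (the nvmf/common.sh@705 "python -" step above). Below is a minimal, illustrative Python sketch of that wrapping; it assumes the helper appends a little-endian CRC32 of the key text before base64-encoding and renders the digest selector (01 = SHA-256, 02 = SHA-384) as two hex digits. The CRC handling is an assumption, so treat this as a sketch rather than the exact helper.

    import base64
    import zlib

    def format_interchange_psk(key_text: str, digest: int) -> str:
        # Assumed layout: NVMeTLSkey-1:<2-digit digest>:<base64(key text + CRC32)>:
        data = key_text.encode()
        crc = zlib.crc32(data).to_bytes(4, "little")  # assumption: little-endian CRC32 suffix
        return "NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(data + crc).decode())

    # Should reproduce the key_long value captured above (modulo the assumed CRC detail).
    print(format_interchange_psk("00112233445566778899aabbccddeeff0011223344556677", 2))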
00:20:42.998 [2024-07-15 16:12:18.610220] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:42.998 EAL: No free 2048 kB hugepages reported on node 1 00:20:42.998 [2024-07-15 16:12:18.688196] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.998 [2024-07-15 16:12:18.740386] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:42.998 [2024-07-15 16:12:18.740420] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:42.998 [2024-07-15 16:12:18.740425] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:42.998 [2024-07-15 16:12:18.740431] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:42.998 [2024-07-15 16:12:18.740435] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:42.998 [2024-07-15 16:12:18.740457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:43.569 16:12:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:43.569 16:12:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:43.569 16:12:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:43.569 16:12:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:43.569 16:12:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:43.830 16:12:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:43.830 16:12:19 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.WBGxBNxgOa 00:20:43.830 16:12:19 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.WBGxBNxgOa 00:20:43.830 16:12:19 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:43.830 [2024-07-15 16:12:19.590211] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:43.830 16:12:19 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:44.090 16:12:19 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:44.090 [2024-07-15 16:12:19.886931] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:44.090 [2024-07-15 16:12:19.887106] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:44.090 16:12:19 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:44.351 malloc0 00:20:44.351 16:12:20 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:44.612 16:12:20 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.WBGxBNxgOa 00:20:44.612 [2024-07-15 16:12:20.346138] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:44.612 16:12:20 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WBGxBNxgOa 00:20:44.612 16:12:20 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:44.612 16:12:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:44.612 16:12:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:44.612 16:12:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.WBGxBNxgOa' 00:20:44.612 16:12:20 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:44.612 16:12:20 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:44.612 16:12:20 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2324568 00:20:44.612 16:12:20 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:44.612 16:12:20 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2324568 /var/tmp/bdevperf.sock 00:20:44.612 16:12:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2324568 ']' 00:20:44.612 16:12:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:44.612 16:12:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:44.612 16:12:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:44.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:44.612 16:12:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:44.612 16:12:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:44.612 [2024-07-15 16:12:20.395516] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
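For context on the bdev_nvme_attach_controller calls driving this run: rpc.py sends a JSON-RPC request over the application's Unix-domain socket, here /var/tmp/bdevperf.sock. The sketch below is a hedged, self-contained illustration of that exchange, reusing the parameter names from the "request:" dumps in this log; the explicit jsonrpc/id framing and the read-until-parse loop are assumptions about the wire format rather than something shown verbatim in the log.

    import json
    import socket

    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "bdev_nvme_attach_controller",
        "params": {
            "name": "TLSTEST",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "psk": "/tmp/tmp.WBGxBNxgOa",  # PSK file path, as passed via --psk above
        },
    }

    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect("/var/tmp/bdevperf.sock")
    sock.sendall(json.dumps(request).encode())

    # Accumulate bytes until the reply parses as a complete JSON document.
    reply = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break
        reply += chunk
        try:
            print(json.loads(reply))
            break
        except ValueError:
            continue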
00:20:44.612 [2024-07-15 16:12:20.395567] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2324568 ] 00:20:44.612 EAL: No free 2048 kB hugepages reported on node 1 00:20:44.612 [2024-07-15 16:12:20.445405] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.873 [2024-07-15 16:12:20.497105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:44.873 16:12:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:44.873 16:12:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:44.873 16:12:20 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WBGxBNxgOa 00:20:45.134 [2024-07-15 16:12:20.716344] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:45.134 [2024-07-15 16:12:20.716405] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:45.134 TLSTESTn1 00:20:45.134 16:12:20 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:45.134 Running I/O for 10 seconds... 00:20:57.360 00:20:57.360 Latency(us) 00:20:57.360 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:57.360 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:57.360 Verification LBA range: start 0x0 length 0x2000 00:20:57.360 TLSTESTn1 : 10.06 3824.09 14.94 0.00 0.00 33360.10 5898.24 68157.44 00:20:57.360 =================================================================================================================== 00:20:57.360 Total : 3824.09 14.94 0.00 0.00 33360.10 5898.24 68157.44 00:20:57.360 0 00:20:57.360 16:12:31 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:57.360 16:12:31 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2324568 00:20:57.360 16:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2324568 ']' 00:20:57.360 16:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2324568 00:20:57.360 16:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:57.360 16:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:57.360 16:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2324568 00:20:57.360 16:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:57.360 16:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:57.360 16:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2324568' 00:20:57.360 killing process with pid 2324568 00:20:57.360 16:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2324568 00:20:57.360 Received shutdown signal, test time was about 10.000000 seconds 00:20:57.360 00:20:57.360 Latency(us) 00:20:57.360 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:20:57.360 =================================================================================================================== 00:20:57.360 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:57.360 [2024-07-15 16:12:31.064339] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:57.360 16:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2324568 00:20:57.360 16:12:31 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.WBGxBNxgOa 00:20:57.360 16:12:31 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WBGxBNxgOa 00:20:57.360 16:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:57.360 16:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WBGxBNxgOa 00:20:57.360 16:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:57.360 16:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:57.360 16:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:57.360 16:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:57.360 16:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WBGxBNxgOa 00:20:57.360 16:12:31 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:57.360 16:12:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:57.360 16:12:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:57.360 16:12:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.WBGxBNxgOa' 00:20:57.360 16:12:31 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:57.360 16:12:31 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2326682 00:20:57.360 16:12:31 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:57.360 16:12:31 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2326682 /var/tmp/bdevperf.sock 00:20:57.360 16:12:31 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:57.360 16:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2326682 ']' 00:20:57.360 16:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:57.360 16:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:57.360 16:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:57.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:57.360 16:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:57.360 16:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:57.360 [2024-07-15 16:12:31.243250] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
00:20:57.360 [2024-07-15 16:12:31.243308] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2326682 ] 00:20:57.360 EAL: No free 2048 kB hugepages reported on node 1 00:20:57.360 [2024-07-15 16:12:31.292292] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.360 [2024-07-15 16:12:31.344327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:57.360 16:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:57.360 16:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:57.360 16:12:31 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WBGxBNxgOa 00:20:57.360 [2024-07-15 16:12:32.136962] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:57.360 [2024-07-15 16:12:32.137002] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:57.360 [2024-07-15 16:12:32.137007] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.WBGxBNxgOa 00:20:57.360 request: 00:20:57.360 { 00:20:57.360 "name": "TLSTEST", 00:20:57.360 "trtype": "tcp", 00:20:57.360 "traddr": "10.0.0.2", 00:20:57.360 "adrfam": "ipv4", 00:20:57.360 "trsvcid": "4420", 00:20:57.361 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:57.361 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:57.361 "prchk_reftag": false, 00:20:57.361 "prchk_guard": false, 00:20:57.361 "hdgst": false, 00:20:57.361 "ddgst": false, 00:20:57.361 "psk": "/tmp/tmp.WBGxBNxgOa", 00:20:57.361 "method": "bdev_nvme_attach_controller", 00:20:57.361 "req_id": 1 00:20:57.361 } 00:20:57.361 Got JSON-RPC error response 00:20:57.361 response: 00:20:57.361 { 00:20:57.361 "code": -1, 00:20:57.361 "message": "Operation not permitted" 00:20:57.361 } 00:20:57.361 16:12:32 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2326682 00:20:57.361 16:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2326682 ']' 00:20:57.361 16:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2326682 00:20:57.361 16:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:57.361 16:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:57.361 16:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2326682 00:20:57.361 16:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:57.361 16:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:57.361 16:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2326682' 00:20:57.361 killing process with pid 2326682 00:20:57.361 16:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2326682 00:20:57.361 Received shutdown signal, test time was about 10.000000 seconds 00:20:57.361 00:20:57.361 Latency(us) 00:20:57.361 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:57.361 
=================================================================================================================== 00:20:57.361 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:57.361 16:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2326682 00:20:57.361 16:12:32 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:57.361 16:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:57.361 16:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:57.361 16:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:57.361 16:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:57.361 16:12:32 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 2324205 00:20:57.361 16:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2324205 ']' 00:20:57.361 16:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2324205 00:20:57.361 16:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:57.361 16:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:57.361 16:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2324205 00:20:57.361 16:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:57.361 16:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:57.361 16:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2324205' 00:20:57.361 killing process with pid 2324205 00:20:57.361 16:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2324205 00:20:57.361 [2024-07-15 16:12:32.368631] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:57.361 16:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2324205 00:20:57.361 16:12:32 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:20:57.361 16:12:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:57.361 16:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:57.361 16:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:57.361 16:12:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2326925 00:20:57.361 16:12:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2326925 00:20:57.361 16:12:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:57.361 16:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2326925 ']' 00:20:57.361 16:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:57.361 16:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:57.361 16:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:57.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
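The "Operation not permitted" / "Incorrect permissions for PSK file" failures above stem from the chmod 0666 on /tmp/tmp.WBGxBNxgOa: the PSK file is refused while it is accessible to group or other, and the negative tests only pass as expected because the key is later chmod'ed back to 0600. The following is a hypothetical check along those lines (the exact mode mask SPDK enforces may differ; this merely matches 0600 being accepted and 0666 rejected in this run):

    import os
    import stat

    def psk_file_mode_ok(path: str) -> bool:
        # Reject any group/other permission bits; owner-only modes such as 0600 pass.
        mode = stat.S_IMODE(os.stat(path).st_mode)
        return mode & (stat.S_IRWXG | stat.S_IRWXO) == 0

    print(psk_file_mode_ok("/tmp/tmp.WBGxBNxgOa"))  # False while 0666, True once chmod 0600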
00:20:57.361 16:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:57.361 16:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:57.361 [2024-07-15 16:12:32.558359] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:20:57.361 [2024-07-15 16:12:32.558413] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:57.361 EAL: No free 2048 kB hugepages reported on node 1 00:20:57.361 [2024-07-15 16:12:32.640548] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.361 [2024-07-15 16:12:32.693899] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:57.361 [2024-07-15 16:12:32.693936] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:57.361 [2024-07-15 16:12:32.693941] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:57.361 [2024-07-15 16:12:32.693946] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:57.361 [2024-07-15 16:12:32.693950] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:57.361 [2024-07-15 16:12:32.693973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:57.622 16:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:57.622 16:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:57.622 16:12:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:57.622 16:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:57.622 16:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:57.622 16:12:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:57.622 16:12:33 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.WBGxBNxgOa 00:20:57.622 16:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:57.622 16:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.WBGxBNxgOa 00:20:57.622 16:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:20:57.622 16:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:57.622 16:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:20:57.622 16:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:57.622 16:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.WBGxBNxgOa 00:20:57.622 16:12:33 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.WBGxBNxgOa 00:20:57.622 16:12:33 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:57.881 [2024-07-15 16:12:33.491648] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:57.881 16:12:33 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:57.881 
16:12:33 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:58.143 [2024-07-15 16:12:33.772324] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:58.143 [2024-07-15 16:12:33.772498] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:58.143 16:12:33 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:58.143 malloc0 00:20:58.143 16:12:33 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:58.402 16:12:34 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WBGxBNxgOa 00:20:58.402 [2024-07-15 16:12:34.231141] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:58.403 [2024-07-15 16:12:34.231159] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:20:58.403 [2024-07-15 16:12:34.231179] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:58.403 request: 00:20:58.403 { 00:20:58.403 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:58.403 "host": "nqn.2016-06.io.spdk:host1", 00:20:58.403 "psk": "/tmp/tmp.WBGxBNxgOa", 00:20:58.403 "method": "nvmf_subsystem_add_host", 00:20:58.403 "req_id": 1 00:20:58.403 } 00:20:58.403 Got JSON-RPC error response 00:20:58.403 response: 00:20:58.403 { 00:20:58.403 "code": -32603, 00:20:58.403 "message": "Internal error" 00:20:58.403 } 00:20:58.664 16:12:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:58.664 16:12:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:58.664 16:12:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:58.664 16:12:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:58.664 16:12:34 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 2326925 00:20:58.664 16:12:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2326925 ']' 00:20:58.664 16:12:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2326925 00:20:58.664 16:12:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:58.664 16:12:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:58.664 16:12:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2326925 00:20:58.664 16:12:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:58.664 16:12:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:58.664 16:12:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2326925' 00:20:58.664 killing process with pid 2326925 00:20:58.664 16:12:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2326925 00:20:58.664 16:12:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2326925 00:20:58.664 16:12:34 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.WBGxBNxgOa 00:20:58.664 16:12:34 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:20:58.664 
16:12:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:58.664 16:12:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:58.664 16:12:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:58.664 16:12:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2327322 00:20:58.664 16:12:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2327322 00:20:58.664 16:12:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:58.664 16:12:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2327322 ']' 00:20:58.664 16:12:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:58.664 16:12:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:58.664 16:12:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:58.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:58.664 16:12:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:58.664 16:12:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:58.664 [2024-07-15 16:12:34.489762] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:20:58.664 [2024-07-15 16:12:34.489818] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:58.925 EAL: No free 2048 kB hugepages reported on node 1 00:20:58.925 [2024-07-15 16:12:34.569529] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.925 [2024-07-15 16:12:34.623061] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:58.925 [2024-07-15 16:12:34.623093] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:58.926 [2024-07-15 16:12:34.623098] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:58.926 [2024-07-15 16:12:34.623102] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:58.926 [2024-07-15 16:12:34.623106] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
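Once the target is back up with the 0600 key, the test captures the target and bdevperf configurations via save_config; the resulting JSON blobs ($tgtconf and $bdevperfconf) are echoed further below in this log, and the -c /dev/fd/62 start near the end of this excerpt suggests a config derived from that dump is replayed into a fresh nvmf_tgt. Below is a small, hypothetical consumer of such a dump (file name assumed) that pulls out the TLS-relevant pieces for this test, namely the PSK registered for host1 and the secure_channel listener:

    import json

    with open("tgtconf.json") as f:  # assumed copy of the $tgtconf blob echoed below
        cfg = json.load(f)

    nvmf = next(s for s in cfg["subsystems"] if s["subsystem"] == "nvmf")
    for entry in nvmf["config"]:
        if entry["method"] == "nvmf_subsystem_add_host":
            print("host:", entry["params"]["host"], "psk:", entry["params"]["psk"])
        elif entry["method"] == "nvmf_subsystem_add_listener":
            print("secure_channel:", entry["params"].get("secure_channel"))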
00:20:58.926 [2024-07-15 16:12:34.623128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:59.497 16:12:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:59.497 16:12:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:59.497 16:12:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:59.497 16:12:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:59.497 16:12:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:59.497 16:12:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:59.497 16:12:35 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.WBGxBNxgOa 00:20:59.497 16:12:35 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.WBGxBNxgOa 00:20:59.497 16:12:35 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:59.758 [2024-07-15 16:12:35.432660] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:59.758 16:12:35 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:00.019 16:12:35 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:00.019 [2024-07-15 16:12:35.741410] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:00.019 [2024-07-15 16:12:35.741580] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:00.019 16:12:35 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:00.280 malloc0 00:21:00.280 16:12:35 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:00.280 16:12:36 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WBGxBNxgOa 00:21:00.541 [2024-07-15 16:12:36.168195] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:00.541 16:12:36 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:00.541 16:12:36 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=2327705 00:21:00.541 16:12:36 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:00.541 16:12:36 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 2327705 /var/tmp/bdevperf.sock 00:21:00.541 16:12:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2327705 ']' 00:21:00.541 16:12:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:00.541 16:12:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:00.541 16:12:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:00.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:00.541 16:12:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:00.541 16:12:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:00.541 [2024-07-15 16:12:36.214187] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:21:00.541 [2024-07-15 16:12:36.214237] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2327705 ] 00:21:00.541 EAL: No free 2048 kB hugepages reported on node 1 00:21:00.541 [2024-07-15 16:12:36.264116] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.541 [2024-07-15 16:12:36.316584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:00.803 16:12:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:00.803 16:12:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:00.803 16:12:36 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WBGxBNxgOa 00:21:00.803 [2024-07-15 16:12:36.536057] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:00.803 [2024-07-15 16:12:36.536125] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:00.803 TLSTESTn1 00:21:00.803 16:12:36 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:01.064 16:12:36 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:21:01.064 "subsystems": [ 00:21:01.064 { 00:21:01.064 "subsystem": "keyring", 00:21:01.064 "config": [] 00:21:01.064 }, 00:21:01.064 { 00:21:01.064 "subsystem": "iobuf", 00:21:01.064 "config": [ 00:21:01.064 { 00:21:01.064 "method": "iobuf_set_options", 00:21:01.064 "params": { 00:21:01.064 "small_pool_count": 8192, 00:21:01.064 "large_pool_count": 1024, 00:21:01.064 "small_bufsize": 8192, 00:21:01.064 "large_bufsize": 135168 00:21:01.064 } 00:21:01.064 } 00:21:01.064 ] 00:21:01.064 }, 00:21:01.064 { 00:21:01.064 "subsystem": "sock", 00:21:01.064 "config": [ 00:21:01.064 { 00:21:01.064 "method": "sock_set_default_impl", 00:21:01.064 "params": { 00:21:01.064 "impl_name": "posix" 00:21:01.064 } 00:21:01.064 }, 00:21:01.064 { 00:21:01.064 "method": "sock_impl_set_options", 00:21:01.064 "params": { 00:21:01.064 "impl_name": "ssl", 00:21:01.064 "recv_buf_size": 4096, 00:21:01.064 "send_buf_size": 4096, 00:21:01.064 "enable_recv_pipe": true, 00:21:01.064 "enable_quickack": false, 00:21:01.064 "enable_placement_id": 0, 00:21:01.064 "enable_zerocopy_send_server": true, 00:21:01.064 "enable_zerocopy_send_client": false, 00:21:01.064 "zerocopy_threshold": 0, 00:21:01.064 "tls_version": 0, 00:21:01.064 "enable_ktls": false 00:21:01.064 } 00:21:01.065 }, 00:21:01.065 { 00:21:01.065 "method": "sock_impl_set_options", 00:21:01.065 "params": { 00:21:01.065 "impl_name": "posix", 00:21:01.065 "recv_buf_size": 2097152, 00:21:01.065 
"send_buf_size": 2097152, 00:21:01.065 "enable_recv_pipe": true, 00:21:01.065 "enable_quickack": false, 00:21:01.065 "enable_placement_id": 0, 00:21:01.065 "enable_zerocopy_send_server": true, 00:21:01.065 "enable_zerocopy_send_client": false, 00:21:01.065 "zerocopy_threshold": 0, 00:21:01.065 "tls_version": 0, 00:21:01.065 "enable_ktls": false 00:21:01.065 } 00:21:01.065 } 00:21:01.065 ] 00:21:01.065 }, 00:21:01.065 { 00:21:01.065 "subsystem": "vmd", 00:21:01.065 "config": [] 00:21:01.065 }, 00:21:01.065 { 00:21:01.065 "subsystem": "accel", 00:21:01.065 "config": [ 00:21:01.065 { 00:21:01.065 "method": "accel_set_options", 00:21:01.065 "params": { 00:21:01.065 "small_cache_size": 128, 00:21:01.065 "large_cache_size": 16, 00:21:01.065 "task_count": 2048, 00:21:01.065 "sequence_count": 2048, 00:21:01.065 "buf_count": 2048 00:21:01.065 } 00:21:01.065 } 00:21:01.065 ] 00:21:01.065 }, 00:21:01.065 { 00:21:01.065 "subsystem": "bdev", 00:21:01.065 "config": [ 00:21:01.065 { 00:21:01.065 "method": "bdev_set_options", 00:21:01.065 "params": { 00:21:01.065 "bdev_io_pool_size": 65535, 00:21:01.065 "bdev_io_cache_size": 256, 00:21:01.065 "bdev_auto_examine": true, 00:21:01.065 "iobuf_small_cache_size": 128, 00:21:01.065 "iobuf_large_cache_size": 16 00:21:01.065 } 00:21:01.065 }, 00:21:01.065 { 00:21:01.065 "method": "bdev_raid_set_options", 00:21:01.065 "params": { 00:21:01.065 "process_window_size_kb": 1024 00:21:01.065 } 00:21:01.065 }, 00:21:01.065 { 00:21:01.065 "method": "bdev_iscsi_set_options", 00:21:01.065 "params": { 00:21:01.065 "timeout_sec": 30 00:21:01.065 } 00:21:01.065 }, 00:21:01.065 { 00:21:01.065 "method": "bdev_nvme_set_options", 00:21:01.065 "params": { 00:21:01.065 "action_on_timeout": "none", 00:21:01.065 "timeout_us": 0, 00:21:01.065 "timeout_admin_us": 0, 00:21:01.065 "keep_alive_timeout_ms": 10000, 00:21:01.065 "arbitration_burst": 0, 00:21:01.065 "low_priority_weight": 0, 00:21:01.065 "medium_priority_weight": 0, 00:21:01.065 "high_priority_weight": 0, 00:21:01.065 "nvme_adminq_poll_period_us": 10000, 00:21:01.065 "nvme_ioq_poll_period_us": 0, 00:21:01.065 "io_queue_requests": 0, 00:21:01.065 "delay_cmd_submit": true, 00:21:01.065 "transport_retry_count": 4, 00:21:01.065 "bdev_retry_count": 3, 00:21:01.065 "transport_ack_timeout": 0, 00:21:01.065 "ctrlr_loss_timeout_sec": 0, 00:21:01.065 "reconnect_delay_sec": 0, 00:21:01.065 "fast_io_fail_timeout_sec": 0, 00:21:01.065 "disable_auto_failback": false, 00:21:01.065 "generate_uuids": false, 00:21:01.065 "transport_tos": 0, 00:21:01.065 "nvme_error_stat": false, 00:21:01.065 "rdma_srq_size": 0, 00:21:01.065 "io_path_stat": false, 00:21:01.065 "allow_accel_sequence": false, 00:21:01.065 "rdma_max_cq_size": 0, 00:21:01.065 "rdma_cm_event_timeout_ms": 0, 00:21:01.065 "dhchap_digests": [ 00:21:01.065 "sha256", 00:21:01.065 "sha384", 00:21:01.065 "sha512" 00:21:01.065 ], 00:21:01.065 "dhchap_dhgroups": [ 00:21:01.065 "null", 00:21:01.065 "ffdhe2048", 00:21:01.065 "ffdhe3072", 00:21:01.065 "ffdhe4096", 00:21:01.065 "ffdhe6144", 00:21:01.065 "ffdhe8192" 00:21:01.065 ] 00:21:01.065 } 00:21:01.065 }, 00:21:01.065 { 00:21:01.065 "method": "bdev_nvme_set_hotplug", 00:21:01.065 "params": { 00:21:01.065 "period_us": 100000, 00:21:01.065 "enable": false 00:21:01.065 } 00:21:01.065 }, 00:21:01.065 { 00:21:01.065 "method": "bdev_malloc_create", 00:21:01.065 "params": { 00:21:01.065 "name": "malloc0", 00:21:01.065 "num_blocks": 8192, 00:21:01.065 "block_size": 4096, 00:21:01.065 "physical_block_size": 4096, 00:21:01.065 "uuid": 
"713a0106-1d59-444d-99b8-64141bbcf762", 00:21:01.065 "optimal_io_boundary": 0 00:21:01.065 } 00:21:01.065 }, 00:21:01.065 { 00:21:01.065 "method": "bdev_wait_for_examine" 00:21:01.065 } 00:21:01.065 ] 00:21:01.065 }, 00:21:01.065 { 00:21:01.065 "subsystem": "nbd", 00:21:01.065 "config": [] 00:21:01.065 }, 00:21:01.065 { 00:21:01.065 "subsystem": "scheduler", 00:21:01.065 "config": [ 00:21:01.065 { 00:21:01.065 "method": "framework_set_scheduler", 00:21:01.065 "params": { 00:21:01.065 "name": "static" 00:21:01.065 } 00:21:01.065 } 00:21:01.065 ] 00:21:01.065 }, 00:21:01.065 { 00:21:01.065 "subsystem": "nvmf", 00:21:01.065 "config": [ 00:21:01.065 { 00:21:01.065 "method": "nvmf_set_config", 00:21:01.065 "params": { 00:21:01.065 "discovery_filter": "match_any", 00:21:01.065 "admin_cmd_passthru": { 00:21:01.065 "identify_ctrlr": false 00:21:01.065 } 00:21:01.065 } 00:21:01.065 }, 00:21:01.065 { 00:21:01.065 "method": "nvmf_set_max_subsystems", 00:21:01.065 "params": { 00:21:01.065 "max_subsystems": 1024 00:21:01.065 } 00:21:01.065 }, 00:21:01.065 { 00:21:01.065 "method": "nvmf_set_crdt", 00:21:01.065 "params": { 00:21:01.065 "crdt1": 0, 00:21:01.065 "crdt2": 0, 00:21:01.065 "crdt3": 0 00:21:01.065 } 00:21:01.065 }, 00:21:01.065 { 00:21:01.065 "method": "nvmf_create_transport", 00:21:01.065 "params": { 00:21:01.065 "trtype": "TCP", 00:21:01.065 "max_queue_depth": 128, 00:21:01.065 "max_io_qpairs_per_ctrlr": 127, 00:21:01.065 "in_capsule_data_size": 4096, 00:21:01.065 "max_io_size": 131072, 00:21:01.065 "io_unit_size": 131072, 00:21:01.065 "max_aq_depth": 128, 00:21:01.065 "num_shared_buffers": 511, 00:21:01.065 "buf_cache_size": 4294967295, 00:21:01.065 "dif_insert_or_strip": false, 00:21:01.065 "zcopy": false, 00:21:01.065 "c2h_success": false, 00:21:01.065 "sock_priority": 0, 00:21:01.065 "abort_timeout_sec": 1, 00:21:01.065 "ack_timeout": 0, 00:21:01.065 "data_wr_pool_size": 0 00:21:01.065 } 00:21:01.065 }, 00:21:01.065 { 00:21:01.065 "method": "nvmf_create_subsystem", 00:21:01.065 "params": { 00:21:01.065 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:01.065 "allow_any_host": false, 00:21:01.065 "serial_number": "SPDK00000000000001", 00:21:01.065 "model_number": "SPDK bdev Controller", 00:21:01.065 "max_namespaces": 10, 00:21:01.065 "min_cntlid": 1, 00:21:01.065 "max_cntlid": 65519, 00:21:01.065 "ana_reporting": false 00:21:01.065 } 00:21:01.065 }, 00:21:01.065 { 00:21:01.065 "method": "nvmf_subsystem_add_host", 00:21:01.065 "params": { 00:21:01.065 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:01.065 "host": "nqn.2016-06.io.spdk:host1", 00:21:01.065 "psk": "/tmp/tmp.WBGxBNxgOa" 00:21:01.065 } 00:21:01.065 }, 00:21:01.065 { 00:21:01.065 "method": "nvmf_subsystem_add_ns", 00:21:01.065 "params": { 00:21:01.065 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:01.065 "namespace": { 00:21:01.065 "nsid": 1, 00:21:01.065 "bdev_name": "malloc0", 00:21:01.065 "nguid": "713A01061D59444D99B864141BBCF762", 00:21:01.065 "uuid": "713a0106-1d59-444d-99b8-64141bbcf762", 00:21:01.065 "no_auto_visible": false 00:21:01.065 } 00:21:01.065 } 00:21:01.065 }, 00:21:01.065 { 00:21:01.065 "method": "nvmf_subsystem_add_listener", 00:21:01.065 "params": { 00:21:01.065 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:01.065 "listen_address": { 00:21:01.065 "trtype": "TCP", 00:21:01.065 "adrfam": "IPv4", 00:21:01.065 "traddr": "10.0.0.2", 00:21:01.065 "trsvcid": "4420" 00:21:01.065 }, 00:21:01.065 "secure_channel": true 00:21:01.065 } 00:21:01.065 } 00:21:01.065 ] 00:21:01.065 } 00:21:01.065 ] 00:21:01.065 }' 00:21:01.066 16:12:36 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:01.327 16:12:37 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:21:01.327 "subsystems": [ 00:21:01.327 { 00:21:01.327 "subsystem": "keyring", 00:21:01.327 "config": [] 00:21:01.327 }, 00:21:01.327 { 00:21:01.327 "subsystem": "iobuf", 00:21:01.327 "config": [ 00:21:01.327 { 00:21:01.327 "method": "iobuf_set_options", 00:21:01.327 "params": { 00:21:01.327 "small_pool_count": 8192, 00:21:01.327 "large_pool_count": 1024, 00:21:01.327 "small_bufsize": 8192, 00:21:01.327 "large_bufsize": 135168 00:21:01.327 } 00:21:01.327 } 00:21:01.327 ] 00:21:01.327 }, 00:21:01.327 { 00:21:01.327 "subsystem": "sock", 00:21:01.327 "config": [ 00:21:01.327 { 00:21:01.327 "method": "sock_set_default_impl", 00:21:01.327 "params": { 00:21:01.327 "impl_name": "posix" 00:21:01.327 } 00:21:01.327 }, 00:21:01.327 { 00:21:01.327 "method": "sock_impl_set_options", 00:21:01.327 "params": { 00:21:01.327 "impl_name": "ssl", 00:21:01.327 "recv_buf_size": 4096, 00:21:01.327 "send_buf_size": 4096, 00:21:01.327 "enable_recv_pipe": true, 00:21:01.327 "enable_quickack": false, 00:21:01.327 "enable_placement_id": 0, 00:21:01.327 "enable_zerocopy_send_server": true, 00:21:01.327 "enable_zerocopy_send_client": false, 00:21:01.327 "zerocopy_threshold": 0, 00:21:01.327 "tls_version": 0, 00:21:01.327 "enable_ktls": false 00:21:01.327 } 00:21:01.327 }, 00:21:01.327 { 00:21:01.327 "method": "sock_impl_set_options", 00:21:01.327 "params": { 00:21:01.327 "impl_name": "posix", 00:21:01.327 "recv_buf_size": 2097152, 00:21:01.327 "send_buf_size": 2097152, 00:21:01.327 "enable_recv_pipe": true, 00:21:01.327 "enable_quickack": false, 00:21:01.327 "enable_placement_id": 0, 00:21:01.327 "enable_zerocopy_send_server": true, 00:21:01.327 "enable_zerocopy_send_client": false, 00:21:01.327 "zerocopy_threshold": 0, 00:21:01.327 "tls_version": 0, 00:21:01.327 "enable_ktls": false 00:21:01.327 } 00:21:01.327 } 00:21:01.327 ] 00:21:01.327 }, 00:21:01.327 { 00:21:01.327 "subsystem": "vmd", 00:21:01.327 "config": [] 00:21:01.327 }, 00:21:01.327 { 00:21:01.327 "subsystem": "accel", 00:21:01.327 "config": [ 00:21:01.327 { 00:21:01.327 "method": "accel_set_options", 00:21:01.327 "params": { 00:21:01.327 "small_cache_size": 128, 00:21:01.327 "large_cache_size": 16, 00:21:01.327 "task_count": 2048, 00:21:01.327 "sequence_count": 2048, 00:21:01.327 "buf_count": 2048 00:21:01.327 } 00:21:01.327 } 00:21:01.327 ] 00:21:01.327 }, 00:21:01.327 { 00:21:01.327 "subsystem": "bdev", 00:21:01.327 "config": [ 00:21:01.327 { 00:21:01.327 "method": "bdev_set_options", 00:21:01.327 "params": { 00:21:01.327 "bdev_io_pool_size": 65535, 00:21:01.327 "bdev_io_cache_size": 256, 00:21:01.327 "bdev_auto_examine": true, 00:21:01.327 "iobuf_small_cache_size": 128, 00:21:01.327 "iobuf_large_cache_size": 16 00:21:01.327 } 00:21:01.327 }, 00:21:01.327 { 00:21:01.327 "method": "bdev_raid_set_options", 00:21:01.327 "params": { 00:21:01.327 "process_window_size_kb": 1024 00:21:01.327 } 00:21:01.327 }, 00:21:01.327 { 00:21:01.327 "method": "bdev_iscsi_set_options", 00:21:01.327 "params": { 00:21:01.327 "timeout_sec": 30 00:21:01.327 } 00:21:01.327 }, 00:21:01.327 { 00:21:01.327 "method": "bdev_nvme_set_options", 00:21:01.327 "params": { 00:21:01.327 "action_on_timeout": "none", 00:21:01.327 "timeout_us": 0, 00:21:01.327 "timeout_admin_us": 0, 00:21:01.327 "keep_alive_timeout_ms": 10000, 00:21:01.327 "arbitration_burst": 0, 
00:21:01.327 "low_priority_weight": 0, 00:21:01.327 "medium_priority_weight": 0, 00:21:01.327 "high_priority_weight": 0, 00:21:01.327 "nvme_adminq_poll_period_us": 10000, 00:21:01.327 "nvme_ioq_poll_period_us": 0, 00:21:01.327 "io_queue_requests": 512, 00:21:01.327 "delay_cmd_submit": true, 00:21:01.327 "transport_retry_count": 4, 00:21:01.327 "bdev_retry_count": 3, 00:21:01.327 "transport_ack_timeout": 0, 00:21:01.327 "ctrlr_loss_timeout_sec": 0, 00:21:01.327 "reconnect_delay_sec": 0, 00:21:01.327 "fast_io_fail_timeout_sec": 0, 00:21:01.327 "disable_auto_failback": false, 00:21:01.327 "generate_uuids": false, 00:21:01.327 "transport_tos": 0, 00:21:01.327 "nvme_error_stat": false, 00:21:01.327 "rdma_srq_size": 0, 00:21:01.327 "io_path_stat": false, 00:21:01.327 "allow_accel_sequence": false, 00:21:01.327 "rdma_max_cq_size": 0, 00:21:01.327 "rdma_cm_event_timeout_ms": 0, 00:21:01.327 "dhchap_digests": [ 00:21:01.327 "sha256", 00:21:01.327 "sha384", 00:21:01.327 "sha512" 00:21:01.327 ], 00:21:01.327 "dhchap_dhgroups": [ 00:21:01.327 "null", 00:21:01.327 "ffdhe2048", 00:21:01.327 "ffdhe3072", 00:21:01.327 "ffdhe4096", 00:21:01.327 "ffdhe6144", 00:21:01.327 "ffdhe8192" 00:21:01.327 ] 00:21:01.327 } 00:21:01.327 }, 00:21:01.327 { 00:21:01.327 "method": "bdev_nvme_attach_controller", 00:21:01.327 "params": { 00:21:01.327 "name": "TLSTEST", 00:21:01.327 "trtype": "TCP", 00:21:01.327 "adrfam": "IPv4", 00:21:01.327 "traddr": "10.0.0.2", 00:21:01.327 "trsvcid": "4420", 00:21:01.327 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:01.327 "prchk_reftag": false, 00:21:01.327 "prchk_guard": false, 00:21:01.327 "ctrlr_loss_timeout_sec": 0, 00:21:01.327 "reconnect_delay_sec": 0, 00:21:01.327 "fast_io_fail_timeout_sec": 0, 00:21:01.327 "psk": "/tmp/tmp.WBGxBNxgOa", 00:21:01.327 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:01.327 "hdgst": false, 00:21:01.327 "ddgst": false 00:21:01.327 } 00:21:01.327 }, 00:21:01.327 { 00:21:01.327 "method": "bdev_nvme_set_hotplug", 00:21:01.327 "params": { 00:21:01.327 "period_us": 100000, 00:21:01.327 "enable": false 00:21:01.327 } 00:21:01.327 }, 00:21:01.327 { 00:21:01.327 "method": "bdev_wait_for_examine" 00:21:01.327 } 00:21:01.327 ] 00:21:01.327 }, 00:21:01.327 { 00:21:01.327 "subsystem": "nbd", 00:21:01.327 "config": [] 00:21:01.327 } 00:21:01.327 ] 00:21:01.327 }' 00:21:01.327 16:12:37 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 2327705 00:21:01.327 16:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2327705 ']' 00:21:01.327 16:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2327705 00:21:01.327 16:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:01.327 16:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:01.327 16:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2327705 00:21:01.327 16:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:01.327 16:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:01.327 16:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2327705' 00:21:01.327 killing process with pid 2327705 00:21:01.327 16:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2327705 00:21:01.327 Received shutdown signal, test time was about 10.000000 seconds 00:21:01.327 00:21:01.327 Latency(us) 00:21:01.327 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:21:01.327 =================================================================================================================== 00:21:01.327 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:01.327 [2024-07-15 16:12:37.167944] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:01.327 16:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2327705 00:21:01.588 16:12:37 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 2327322 00:21:01.588 16:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2327322 ']' 00:21:01.588 16:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2327322 00:21:01.588 16:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:01.588 16:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:01.588 16:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2327322 00:21:01.588 16:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:01.588 16:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:01.588 16:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2327322' 00:21:01.588 killing process with pid 2327322 00:21:01.588 16:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2327322 00:21:01.588 [2024-07-15 16:12:37.333042] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:01.588 16:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2327322 00:21:01.849 16:12:37 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:01.849 16:12:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:01.849 16:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:01.849 16:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:01.849 16:12:37 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:21:01.849 "subsystems": [ 00:21:01.849 { 00:21:01.849 "subsystem": "keyring", 00:21:01.849 "config": [] 00:21:01.849 }, 00:21:01.849 { 00:21:01.849 "subsystem": "iobuf", 00:21:01.849 "config": [ 00:21:01.849 { 00:21:01.849 "method": "iobuf_set_options", 00:21:01.849 "params": { 00:21:01.849 "small_pool_count": 8192, 00:21:01.849 "large_pool_count": 1024, 00:21:01.849 "small_bufsize": 8192, 00:21:01.849 "large_bufsize": 135168 00:21:01.849 } 00:21:01.849 } 00:21:01.849 ] 00:21:01.849 }, 00:21:01.849 { 00:21:01.849 "subsystem": "sock", 00:21:01.849 "config": [ 00:21:01.849 { 00:21:01.849 "method": "sock_set_default_impl", 00:21:01.849 "params": { 00:21:01.849 "impl_name": "posix" 00:21:01.849 } 00:21:01.849 }, 00:21:01.849 { 00:21:01.849 "method": "sock_impl_set_options", 00:21:01.849 "params": { 00:21:01.849 "impl_name": "ssl", 00:21:01.849 "recv_buf_size": 4096, 00:21:01.849 "send_buf_size": 4096, 00:21:01.849 "enable_recv_pipe": true, 00:21:01.849 "enable_quickack": false, 00:21:01.849 "enable_placement_id": 0, 00:21:01.849 "enable_zerocopy_send_server": true, 00:21:01.849 "enable_zerocopy_send_client": false, 00:21:01.849 "zerocopy_threshold": 0, 00:21:01.849 "tls_version": 0, 00:21:01.849 "enable_ktls": false 00:21:01.849 } 00:21:01.849 }, 00:21:01.849 { 00:21:01.849 "method": "sock_impl_set_options", 
00:21:01.849 "params": { 00:21:01.849 "impl_name": "posix", 00:21:01.849 "recv_buf_size": 2097152, 00:21:01.849 "send_buf_size": 2097152, 00:21:01.849 "enable_recv_pipe": true, 00:21:01.849 "enable_quickack": false, 00:21:01.849 "enable_placement_id": 0, 00:21:01.849 "enable_zerocopy_send_server": true, 00:21:01.849 "enable_zerocopy_send_client": false, 00:21:01.849 "zerocopy_threshold": 0, 00:21:01.849 "tls_version": 0, 00:21:01.849 "enable_ktls": false 00:21:01.849 } 00:21:01.849 } 00:21:01.849 ] 00:21:01.849 }, 00:21:01.849 { 00:21:01.849 "subsystem": "vmd", 00:21:01.849 "config": [] 00:21:01.849 }, 00:21:01.849 { 00:21:01.849 "subsystem": "accel", 00:21:01.849 "config": [ 00:21:01.849 { 00:21:01.849 "method": "accel_set_options", 00:21:01.849 "params": { 00:21:01.849 "small_cache_size": 128, 00:21:01.849 "large_cache_size": 16, 00:21:01.849 "task_count": 2048, 00:21:01.849 "sequence_count": 2048, 00:21:01.849 "buf_count": 2048 00:21:01.849 } 00:21:01.849 } 00:21:01.849 ] 00:21:01.849 }, 00:21:01.849 { 00:21:01.849 "subsystem": "bdev", 00:21:01.849 "config": [ 00:21:01.849 { 00:21:01.849 "method": "bdev_set_options", 00:21:01.849 "params": { 00:21:01.849 "bdev_io_pool_size": 65535, 00:21:01.849 "bdev_io_cache_size": 256, 00:21:01.849 "bdev_auto_examine": true, 00:21:01.849 "iobuf_small_cache_size": 128, 00:21:01.849 "iobuf_large_cache_size": 16 00:21:01.849 } 00:21:01.849 }, 00:21:01.849 { 00:21:01.849 "method": "bdev_raid_set_options", 00:21:01.849 "params": { 00:21:01.849 "process_window_size_kb": 1024 00:21:01.849 } 00:21:01.849 }, 00:21:01.849 { 00:21:01.849 "method": "bdev_iscsi_set_options", 00:21:01.849 "params": { 00:21:01.849 "timeout_sec": 30 00:21:01.849 } 00:21:01.849 }, 00:21:01.849 { 00:21:01.849 "method": "bdev_nvme_set_options", 00:21:01.849 "params": { 00:21:01.849 "action_on_timeout": "none", 00:21:01.849 "timeout_us": 0, 00:21:01.849 "timeout_admin_us": 0, 00:21:01.849 "keep_alive_timeout_ms": 10000, 00:21:01.849 "arbitration_burst": 0, 00:21:01.849 "low_priority_weight": 0, 00:21:01.849 "medium_priority_weight": 0, 00:21:01.849 "high_priority_weight": 0, 00:21:01.849 "nvme_adminq_poll_period_us": 10000, 00:21:01.849 "nvme_ioq_poll_period_us": 0, 00:21:01.849 "io_queue_requests": 0, 00:21:01.849 "delay_cmd_submit": true, 00:21:01.849 "transport_retry_count": 4, 00:21:01.849 "bdev_retry_count": 3, 00:21:01.849 "transport_ack_timeout": 0, 00:21:01.849 "ctrlr_loss_timeout_sec": 0, 00:21:01.849 "reconnect_delay_sec": 0, 00:21:01.849 "fast_io_fail_timeout_sec": 0, 00:21:01.849 "disable_auto_failback": false, 00:21:01.849 "generate_uuids": false, 00:21:01.849 "transport_tos": 0, 00:21:01.849 "nvme_error_stat": false, 00:21:01.849 "rdma_srq_size": 0, 00:21:01.849 "io_path_stat": false, 00:21:01.849 "allow_accel_sequence": false, 00:21:01.849 "rdma_max_cq_size": 0, 00:21:01.849 "rdma_cm_event_timeout_ms": 0, 00:21:01.849 "dhchap_digests": [ 00:21:01.849 "sha256", 00:21:01.849 "sha384", 00:21:01.849 "sha512" 00:21:01.849 ], 00:21:01.849 "dhchap_dhgroups": [ 00:21:01.849 "null", 00:21:01.849 "ffdhe2048", 00:21:01.849 "ffdhe3072", 00:21:01.849 "ffdhe4096", 00:21:01.849 "ffdhe6144", 00:21:01.849 "ffdhe8192" 00:21:01.849 ] 00:21:01.849 } 00:21:01.849 }, 00:21:01.849 { 00:21:01.849 "method": "bdev_nvme_set_hotplug", 00:21:01.849 "params": { 00:21:01.849 "period_us": 100000, 00:21:01.849 "enable": false 00:21:01.849 } 00:21:01.849 }, 00:21:01.849 { 00:21:01.849 "method": "bdev_malloc_create", 00:21:01.849 "params": { 00:21:01.849 "name": "malloc0", 00:21:01.849 "num_blocks": 8192, 
00:21:01.849 "block_size": 4096, 00:21:01.849 "physical_block_size": 4096, 00:21:01.849 "uuid": "713a0106-1d59-444d-99b8-64141bbcf762", 00:21:01.849 "optimal_io_boundary": 0 00:21:01.849 } 00:21:01.849 }, 00:21:01.849 { 00:21:01.849 "method": "bdev_wait_for_examine" 00:21:01.849 } 00:21:01.849 ] 00:21:01.849 }, 00:21:01.849 { 00:21:01.849 "subsystem": "nbd", 00:21:01.849 "config": [] 00:21:01.849 }, 00:21:01.849 { 00:21:01.849 "subsystem": "scheduler", 00:21:01.849 "config": [ 00:21:01.849 { 00:21:01.849 "method": "framework_set_scheduler", 00:21:01.849 "params": { 00:21:01.849 "name": "static" 00:21:01.849 } 00:21:01.849 } 00:21:01.849 ] 00:21:01.849 }, 00:21:01.849 { 00:21:01.849 "subsystem": "nvmf", 00:21:01.849 "config": [ 00:21:01.849 { 00:21:01.849 "method": "nvmf_set_config", 00:21:01.849 "params": { 00:21:01.849 "discovery_filter": "match_any", 00:21:01.849 "admin_cmd_passthru": { 00:21:01.849 "identify_ctrlr": false 00:21:01.849 } 00:21:01.849 } 00:21:01.849 }, 00:21:01.849 { 00:21:01.849 "method": "nvmf_set_max_subsystems", 00:21:01.849 "params": { 00:21:01.849 "max_subsystems": 1024 00:21:01.849 } 00:21:01.849 }, 00:21:01.849 { 00:21:01.849 "method": "nvmf_set_crdt", 00:21:01.849 "params": { 00:21:01.849 "crdt1": 0, 00:21:01.849 "crdt2": 0, 00:21:01.849 "crdt3": 0 00:21:01.849 } 00:21:01.849 }, 00:21:01.849 { 00:21:01.849 "method": "nvmf_create_transport", 00:21:01.849 "params": { 00:21:01.849 "trtype": "TCP", 00:21:01.849 "max_queue_depth": 128, 00:21:01.849 "max_io_qpairs_per_ctrlr": 127, 00:21:01.849 "in_capsule_data_size": 4096, 00:21:01.849 "max_io_size": 131072, 00:21:01.849 "io_unit_size": 131072, 00:21:01.849 "max_aq_depth": 128, 00:21:01.849 "num_shared_buffers": 511, 00:21:01.849 "buf_cache_size": 4294967295, 00:21:01.850 "dif_insert_or_strip": false, 00:21:01.850 "zcopy": false, 00:21:01.850 "c2h_success": false, 00:21:01.850 "sock_priority": 0, 00:21:01.850 "abort_timeout_sec": 1, 00:21:01.850 "ack_timeout": 0, 00:21:01.850 "data_wr_pool_size": 0 00:21:01.850 } 00:21:01.850 }, 00:21:01.850 { 00:21:01.850 "method": "nvmf_create_subsystem", 00:21:01.850 "params": { 00:21:01.850 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:01.850 "allow_any_host": false, 00:21:01.850 "serial_number": "SPDK00000000000001", 00:21:01.850 "model_number": "SPDK bdev Controller", 00:21:01.850 "max_namespaces": 10, 00:21:01.850 "min_cntlid": 1, 00:21:01.850 "max_cntlid": 65519, 00:21:01.850 "ana_reporting": false 00:21:01.850 } 00:21:01.850 }, 00:21:01.850 { 00:21:01.850 "method": "nvmf_subsystem_add_host", 00:21:01.850 "params": { 00:21:01.850 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:01.850 "host": "nqn.2016-06.io.spdk:host1", 00:21:01.850 "psk": "/tmp/tmp.WBGxBNxgOa" 00:21:01.850 } 00:21:01.850 }, 00:21:01.850 { 00:21:01.850 "method": "nvmf_subsystem_add_ns", 00:21:01.850 "params": { 00:21:01.850 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:01.850 "namespace": { 00:21:01.850 "nsid": 1, 00:21:01.850 "bdev_name": "malloc0", 00:21:01.850 "nguid": "713A01061D59444D99B864141BBCF762", 00:21:01.850 "uuid": "713a0106-1d59-444d-99b8-64141bbcf762", 00:21:01.850 "no_auto_visible": false 00:21:01.850 } 00:21:01.850 } 00:21:01.850 }, 00:21:01.850 { 00:21:01.850 "method": "nvmf_subsystem_add_listener", 00:21:01.850 "params": { 00:21:01.850 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:01.850 "listen_address": { 00:21:01.850 "trtype": "TCP", 00:21:01.850 "adrfam": "IPv4", 00:21:01.850 "traddr": "10.0.0.2", 00:21:01.850 "trsvcid": "4420" 00:21:01.850 }, 00:21:01.850 "secure_channel": true 00:21:01.850 } 
00:21:01.850 } 00:21:01.850 ] 00:21:01.850 } 00:21:01.850 ] 00:21:01.850 }' 00:21:01.850 16:12:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2328005 00:21:01.850 16:12:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2328005 00:21:01.850 16:12:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:01.850 16:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2328005 ']' 00:21:01.850 16:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:01.850 16:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:01.850 16:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:01.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:01.850 16:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:01.850 16:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:01.850 [2024-07-15 16:12:37.509784] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:21:01.850 [2024-07-15 16:12:37.509837] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:01.850 EAL: No free 2048 kB hugepages reported on node 1 00:21:01.850 [2024-07-15 16:12:37.593325] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.850 [2024-07-15 16:12:37.645534] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:01.850 [2024-07-15 16:12:37.645566] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:01.850 [2024-07-15 16:12:37.645572] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:01.850 [2024-07-15 16:12:37.645576] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:01.850 [2024-07-15 16:12:37.645580] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
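Note on the start-up traced above: this nvmf_tgt instance (nvmf/common.sh@480) is not configured RPC-by-RPC; it is launched with -c /dev/fd/62, so it loads the complete JSON blob echoed at target/tls.sh@203, which was itself captured earlier with save_config. Outside the harness the same round trip can be approximated as below. This is a minimal sketch: config.json is an illustrative file name, and the process-substitution form is only a guess at how a /dev/fd path like /dev/fd/62 comes about.

  # Capture the running target's configuration (same RPC used at tls.sh@197).
  scripts/rpc.py -s /var/tmp/spdk.sock save_config > config.json

  # Restart the target from that file; -m 0x2 matches the core mask used above.
  build/bin/nvmf_tgt -m 0x2 -c config.json

  # Equivalent without a temporary file: process substitution hands the child
  # a /dev/fd/NN path, which is what the -c /dev/fd/62 argument suggests.
  build/bin/nvmf_tgt -m 0x2 -c <(cat config.json)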
00:21:01.850 [2024-07-15 16:12:37.645627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:02.110 [2024-07-15 16:12:37.828749] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:02.110 [2024-07-15 16:12:37.844725] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:02.110 [2024-07-15 16:12:37.860767] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:02.110 [2024-07-15 16:12:37.869268] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:02.680 16:12:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:02.680 16:12:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:02.680 16:12:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:02.680 16:12:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:02.680 16:12:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.680 16:12:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:02.680 16:12:38 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=2328279 00:21:02.680 16:12:38 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 2328279 /var/tmp/bdevperf.sock 00:21:02.680 16:12:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2328279 ']' 00:21:02.680 16:12:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:02.680 16:12:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:02.680 16:12:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:02.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
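The next trace line (target/tls.sh@204) starts bdevperf with -z, which makes it initialize, open the RPC socket named by -r, and then idle instead of running I/O; the workload is only triggered later (tls.sh@211) through bdevperf.py perform_tests. A minimal sketch of that start-and-drive pattern, reusing the flags that appear in the trace, with bdevperf.json as an illustrative stand-in for the /dev/fd/63 config:

  # Start bdevperf idle (-z) with its own RPC server on a private socket.
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 -c bdevperf.json &

  # Once the socket answers, kick off the configured job and collect results.
  examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests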
00:21:02.681 16:12:38 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:02.681 16:12:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:02.681 16:12:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.681 16:12:38 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:21:02.681 "subsystems": [ 00:21:02.681 { 00:21:02.681 "subsystem": "keyring", 00:21:02.681 "config": [] 00:21:02.681 }, 00:21:02.681 { 00:21:02.681 "subsystem": "iobuf", 00:21:02.681 "config": [ 00:21:02.681 { 00:21:02.681 "method": "iobuf_set_options", 00:21:02.681 "params": { 00:21:02.681 "small_pool_count": 8192, 00:21:02.681 "large_pool_count": 1024, 00:21:02.681 "small_bufsize": 8192, 00:21:02.681 "large_bufsize": 135168 00:21:02.681 } 00:21:02.681 } 00:21:02.681 ] 00:21:02.681 }, 00:21:02.681 { 00:21:02.681 "subsystem": "sock", 00:21:02.681 "config": [ 00:21:02.681 { 00:21:02.681 "method": "sock_set_default_impl", 00:21:02.681 "params": { 00:21:02.681 "impl_name": "posix" 00:21:02.681 } 00:21:02.681 }, 00:21:02.681 { 00:21:02.681 "method": "sock_impl_set_options", 00:21:02.681 "params": { 00:21:02.681 "impl_name": "ssl", 00:21:02.681 "recv_buf_size": 4096, 00:21:02.681 "send_buf_size": 4096, 00:21:02.681 "enable_recv_pipe": true, 00:21:02.681 "enable_quickack": false, 00:21:02.681 "enable_placement_id": 0, 00:21:02.681 "enable_zerocopy_send_server": true, 00:21:02.681 "enable_zerocopy_send_client": false, 00:21:02.681 "zerocopy_threshold": 0, 00:21:02.681 "tls_version": 0, 00:21:02.681 "enable_ktls": false 00:21:02.681 } 00:21:02.681 }, 00:21:02.681 { 00:21:02.681 "method": "sock_impl_set_options", 00:21:02.681 "params": { 00:21:02.681 "impl_name": "posix", 00:21:02.681 "recv_buf_size": 2097152, 00:21:02.681 "send_buf_size": 2097152, 00:21:02.681 "enable_recv_pipe": true, 00:21:02.681 "enable_quickack": false, 00:21:02.681 "enable_placement_id": 0, 00:21:02.681 "enable_zerocopy_send_server": true, 00:21:02.681 "enable_zerocopy_send_client": false, 00:21:02.681 "zerocopy_threshold": 0, 00:21:02.681 "tls_version": 0, 00:21:02.681 "enable_ktls": false 00:21:02.681 } 00:21:02.681 } 00:21:02.681 ] 00:21:02.681 }, 00:21:02.681 { 00:21:02.681 "subsystem": "vmd", 00:21:02.681 "config": [] 00:21:02.681 }, 00:21:02.681 { 00:21:02.681 "subsystem": "accel", 00:21:02.681 "config": [ 00:21:02.681 { 00:21:02.681 "method": "accel_set_options", 00:21:02.681 "params": { 00:21:02.681 "small_cache_size": 128, 00:21:02.681 "large_cache_size": 16, 00:21:02.681 "task_count": 2048, 00:21:02.681 "sequence_count": 2048, 00:21:02.681 "buf_count": 2048 00:21:02.681 } 00:21:02.681 } 00:21:02.681 ] 00:21:02.681 }, 00:21:02.681 { 00:21:02.681 "subsystem": "bdev", 00:21:02.681 "config": [ 00:21:02.681 { 00:21:02.681 "method": "bdev_set_options", 00:21:02.681 "params": { 00:21:02.681 "bdev_io_pool_size": 65535, 00:21:02.681 "bdev_io_cache_size": 256, 00:21:02.681 "bdev_auto_examine": true, 00:21:02.681 "iobuf_small_cache_size": 128, 00:21:02.681 "iobuf_large_cache_size": 16 00:21:02.681 } 00:21:02.681 }, 00:21:02.681 { 00:21:02.681 "method": "bdev_raid_set_options", 00:21:02.681 "params": { 00:21:02.681 "process_window_size_kb": 1024 00:21:02.681 } 00:21:02.681 }, 00:21:02.681 { 00:21:02.681 "method": "bdev_iscsi_set_options", 00:21:02.681 "params": { 00:21:02.681 "timeout_sec": 30 00:21:02.681 } 00:21:02.681 }, 00:21:02.681 { 00:21:02.681 "method": 
"bdev_nvme_set_options", 00:21:02.681 "params": { 00:21:02.681 "action_on_timeout": "none", 00:21:02.681 "timeout_us": 0, 00:21:02.681 "timeout_admin_us": 0, 00:21:02.681 "keep_alive_timeout_ms": 10000, 00:21:02.681 "arbitration_burst": 0, 00:21:02.681 "low_priority_weight": 0, 00:21:02.681 "medium_priority_weight": 0, 00:21:02.681 "high_priority_weight": 0, 00:21:02.681 "nvme_adminq_poll_period_us": 10000, 00:21:02.681 "nvme_ioq_poll_period_us": 0, 00:21:02.681 "io_queue_requests": 512, 00:21:02.681 "delay_cmd_submit": true, 00:21:02.681 "transport_retry_count": 4, 00:21:02.681 "bdev_retry_count": 3, 00:21:02.681 "transport_ack_timeout": 0, 00:21:02.681 "ctrlr_loss_timeout_sec": 0, 00:21:02.681 "reconnect_delay_sec": 0, 00:21:02.681 "fast_io_fail_timeout_sec": 0, 00:21:02.681 "disable_auto_failback": false, 00:21:02.681 "generate_uuids": false, 00:21:02.681 "transport_tos": 0, 00:21:02.681 "nvme_error_stat": false, 00:21:02.681 "rdma_srq_size": 0, 00:21:02.681 "io_path_stat": false, 00:21:02.681 "allow_accel_sequence": false, 00:21:02.681 "rdma_max_cq_size": 0, 00:21:02.681 "rdma_cm_event_timeout_ms": 0, 00:21:02.681 "dhchap_digests": [ 00:21:02.681 "sha256", 00:21:02.681 "sha384", 00:21:02.681 "sha512" 00:21:02.681 ], 00:21:02.681 "dhchap_dhgroups": [ 00:21:02.681 "null", 00:21:02.681 "ffdhe2048", 00:21:02.681 "ffdhe3072", 00:21:02.681 "ffdhe4096", 00:21:02.681 "ffdhe6144", 00:21:02.681 "ffdhe8192" 00:21:02.681 ] 00:21:02.681 } 00:21:02.681 }, 00:21:02.681 { 00:21:02.681 "method": "bdev_nvme_attach_controller", 00:21:02.681 "params": { 00:21:02.681 "name": "TLSTEST", 00:21:02.681 "trtype": "TCP", 00:21:02.681 "adrfam": "IPv4", 00:21:02.681 "traddr": "10.0.0.2", 00:21:02.681 "trsvcid": "4420", 00:21:02.681 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:02.681 "prchk_reftag": false, 00:21:02.681 "prchk_guard": false, 00:21:02.681 "ctrlr_loss_timeout_sec": 0, 00:21:02.681 "reconnect_delay_sec": 0, 00:21:02.681 "fast_io_fail_timeout_sec": 0, 00:21:02.681 "psk": "/tmp/tmp.WBGxBNxgOa", 00:21:02.681 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:02.681 "hdgst": false, 00:21:02.681 "ddgst": false 00:21:02.681 } 00:21:02.681 }, 00:21:02.681 { 00:21:02.681 "method": "bdev_nvme_set_hotplug", 00:21:02.681 "params": { 00:21:02.681 "period_us": 100000, 00:21:02.681 "enable": false 00:21:02.681 } 00:21:02.681 }, 00:21:02.681 { 00:21:02.681 "method": "bdev_wait_for_examine" 00:21:02.681 } 00:21:02.681 ] 00:21:02.681 }, 00:21:02.681 { 00:21:02.681 "subsystem": "nbd", 00:21:02.681 "config": [] 00:21:02.681 } 00:21:02.681 ] 00:21:02.681 }' 00:21:02.681 [2024-07-15 16:12:38.371285] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
00:21:02.681 [2024-07-15 16:12:38.371335] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2328279 ] 00:21:02.681 EAL: No free 2048 kB hugepages reported on node 1 00:21:02.681 [2024-07-15 16:12:38.424482] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.681 [2024-07-15 16:12:38.477251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:02.967 [2024-07-15 16:12:38.601870] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:02.967 [2024-07-15 16:12:38.601939] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:03.543 16:12:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:03.543 16:12:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:03.543 16:12:39 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:03.543 Running I/O for 10 seconds... 00:21:13.573 00:21:13.573 Latency(us) 00:21:13.573 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:13.573 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:13.573 Verification LBA range: start 0x0 length 0x2000 00:21:13.573 TLSTESTn1 : 10.04 2564.63 10.02 0.00 0.00 49819.84 6225.92 125829.12 00:21:13.573 =================================================================================================================== 00:21:13.573 Total : 2564.63 10.02 0.00 0.00 49819.84 6225.92 125829.12 00:21:13.573 0 00:21:13.573 16:12:49 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:13.573 16:12:49 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 2328279 00:21:13.573 16:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2328279 ']' 00:21:13.573 16:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2328279 00:21:13.573 16:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:13.573 16:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:13.573 16:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2328279 00:21:13.573 16:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:13.573 16:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:13.573 16:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2328279' 00:21:13.573 killing process with pid 2328279 00:21:13.573 16:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2328279 00:21:13.573 Received shutdown signal, test time was about 10.000000 seconds 00:21:13.573 00:21:13.573 Latency(us) 00:21:13.573 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:13.573 =================================================================================================================== 00:21:13.573 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:13.573 [2024-07-15 16:12:49.375587] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:13.573 16:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2328279 00:21:13.834 16:12:49 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 2328005 00:21:13.834 16:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2328005 ']' 00:21:13.834 16:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2328005 00:21:13.834 16:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:13.834 16:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:13.834 16:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2328005 00:21:13.834 16:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:13.834 16:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:13.834 16:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2328005' 00:21:13.834 killing process with pid 2328005 00:21:13.834 16:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2328005 00:21:13.834 [2024-07-15 16:12:49.542739] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:13.834 16:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2328005 00:21:13.834 16:12:49 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:21:13.834 16:12:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:13.834 16:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:13.834 16:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.834 16:12:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:13.834 16:12:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2330382 00:21:13.834 16:12:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2330382 00:21:13.834 16:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2330382 ']' 00:21:13.834 16:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:13.834 16:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:13.834 16:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:13.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:13.834 16:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:13.834 16:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:14.095 [2024-07-15 16:12:49.695489] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
00:21:14.095 [2024-07-15 16:12:49.695536] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:14.095 EAL: No free 2048 kB hugepages reported on node 1 00:21:14.095 [2024-07-15 16:12:49.748583] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.095 [2024-07-15 16:12:49.811196] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:14.095 [2024-07-15 16:12:49.811233] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:14.095 [2024-07-15 16:12:49.811240] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:14.095 [2024-07-15 16:12:49.811247] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:14.095 [2024-07-15 16:12:49.811252] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:14.095 [2024-07-15 16:12:49.811271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.667 16:12:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:14.667 16:12:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:14.667 16:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:14.667 16:12:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:14.667 16:12:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:14.928 16:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:14.928 16:12:50 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.WBGxBNxgOa 00:21:14.928 16:12:50 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.WBGxBNxgOa 00:21:14.928 16:12:50 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:14.928 [2024-07-15 16:12:50.669983] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:14.928 16:12:50 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:15.188 16:12:50 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:15.188 [2024-07-15 16:12:50.982743] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:15.188 [2024-07-15 16:12:50.982943] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:15.188 16:12:50 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:15.449 malloc0 00:21:15.449 16:12:51 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:15.449 16:12:51 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.WBGxBNxgOa 00:21:15.710 [2024-07-15 16:12:51.418732] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:15.710 16:12:51 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=2330744 00:21:15.710 16:12:51 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:15.710 16:12:51 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:15.710 16:12:51 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 2330744 /var/tmp/bdevperf.sock 00:21:15.710 16:12:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2330744 ']' 00:21:15.710 16:12:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:15.710 16:12:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:15.710 16:12:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:15.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:15.710 16:12:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:15.710 16:12:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:15.710 [2024-07-15 16:12:51.482405] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:21:15.710 [2024-07-15 16:12:51.482458] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2330744 ] 00:21:15.710 EAL: No free 2048 kB hugepages reported on node 1 00:21:15.971 [2024-07-15 16:12:51.556244] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.971 [2024-07-15 16:12:51.610024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:16.542 16:12:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:16.542 16:12:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:16.542 16:12:52 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WBGxBNxgOa 00:21:16.803 16:12:52 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:16.803 [2024-07-15 16:12:52.531963] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:16.803 nvme0n1 00:21:16.803 16:12:52 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:17.064 Running I/O for 1 seconds... 
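The sequence traced at target/tls.sh@51-@58 and @227-@228 is the heart of this test case: the target gets a secure-channel listener (-k) plus a host entry that points at the PSK file, while the bdevperf side registers the same file in its keyring and attaches by key name. Collected in one place for readability; these are the same commands and arguments as in the trace above, with the rpc.py path shortened:

  # Target side (default RPC socket /var/tmp/spdk.sock)
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # Deprecated "PSK path" form, hence the tcp.c:3679 warning logged above.
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
      --psk /tmp/tmp.WBGxBNxgOa

  # Initiator side (bdevperf's RPC socket)
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WBGxBNxgOa
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1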
00:21:18.004 00:21:18.004 Latency(us) 00:21:18.004 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:18.004 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:18.004 Verification LBA range: start 0x0 length 0x2000 00:21:18.004 nvme0n1 : 1.07 2292.61 8.96 0.00 0.00 54240.48 5816.32 68594.35 00:21:18.004 =================================================================================================================== 00:21:18.004 Total : 2292.61 8.96 0.00 0.00 54240.48 5816.32 68594.35 00:21:18.004 0 00:21:18.004 16:12:53 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 2330744 00:21:18.004 16:12:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2330744 ']' 00:21:18.004 16:12:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2330744 00:21:18.004 16:12:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:18.004 16:12:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:18.004 16:12:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2330744 00:21:18.265 16:12:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:18.265 16:12:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:18.265 16:12:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2330744' 00:21:18.265 killing process with pid 2330744 00:21:18.265 16:12:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2330744 00:21:18.265 Received shutdown signal, test time was about 1.000000 seconds 00:21:18.265 00:21:18.265 Latency(us) 00:21:18.265 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:18.265 =================================================================================================================== 00:21:18.265 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:18.265 16:12:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2330744 00:21:18.265 16:12:53 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 2330382 00:21:18.265 16:12:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2330382 ']' 00:21:18.265 16:12:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2330382 00:21:18.265 16:12:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:18.265 16:12:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:18.265 16:12:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2330382 00:21:18.265 16:12:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:18.265 16:12:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:18.265 16:12:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2330382' 00:21:18.265 killing process with pid 2330382 00:21:18.265 16:12:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2330382 00:21:18.265 [2024-07-15 16:12:54.033444] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:18.265 16:12:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2330382 00:21:18.526 16:12:54 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:21:18.526 16:12:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:18.526 
16:12:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:18.526 16:12:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.526 16:12:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2331416 00:21:18.526 16:12:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2331416 00:21:18.526 16:12:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:18.526 16:12:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2331416 ']' 00:21:18.526 16:12:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:18.526 16:12:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:18.526 16:12:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:18.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:18.526 16:12:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:18.526 16:12:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.526 [2024-07-15 16:12:54.229754] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:21:18.526 [2024-07-15 16:12:54.229809] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:18.526 EAL: No free 2048 kB hugepages reported on node 1 00:21:18.526 [2024-07-15 16:12:54.293579] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.526 [2024-07-15 16:12:54.358179] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:18.526 [2024-07-15 16:12:54.358213] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:18.526 [2024-07-15 16:12:54.358220] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:18.526 [2024-07-15 16:12:54.358227] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:18.526 [2024-07-15 16:12:54.358232] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
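The killprocess helper (common/autotest_common.sh@948-@972), used above to tear down the previous bdevperf/nvmf_tgt pair (pids 2330744 and 2330382) and again at the end of each test case, runs the same visible steps every time: check that a pid was passed, confirm the process still exists with kill -0, read its comm name with ps to make sure it is an SPDK reactor rather than sudo, then kill it and wait. The sketch below is a simplified reconstruction based only on the commands that appear in the trace; the real helper carries extra checks (e.g. the uname test) that are omitted here.

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1           # the '[ -z ... ]' guard in the trace
      kill -0 "$pid" || return 1          # process must still be alive
      local name
      name=$(ps --no-headers -o comm= "$pid")
      [ "$name" != "sudo" ] || return 1   # never kill the sudo wrapper
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
  }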
00:21:18.526 [2024-07-15 16:12:54.358250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.468 16:12:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:19.468 16:12:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:19.468 16:12:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:19.468 16:12:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:19.468 16:12:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:19.468 16:12:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:19.468 16:12:55 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:21:19.468 16:12:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.468 16:12:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:19.468 [2024-07-15 16:12:55.032481] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:19.468 malloc0 00:21:19.468 [2024-07-15 16:12:55.059204] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:19.468 [2024-07-15 16:12:55.059405] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:19.468 16:12:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.469 16:12:55 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=2331448 00:21:19.469 16:12:55 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 2331448 /var/tmp/bdevperf.sock 00:21:19.469 16:12:55 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:19.469 16:12:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2331448 ']' 00:21:19.469 16:12:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:19.469 16:12:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:19.469 16:12:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:19.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:19.469 16:12:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:19.469 16:12:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:19.469 [2024-07-15 16:12:55.137732] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
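waitforlisten, called above for the new bdevperf instance (pid 2331448, socket /var/tmp/bdevperf.sock), blocks until the freshly started application answers on its RPC socket so that the keyring and attach RPCs that follow do not race application start-up. The helper itself is not reproduced in this log; the loop below is only a rough stand-alone approximation of the same idea, using rpc_get_methods as the probe RPC:

  wait_for_rpc() {
      local sock=${1:-/var/tmp/spdk.sock}
      # Keep probing until the RPC server accepts a request.
      until scripts/rpc.py -t 1 -s "$sock" rpc_get_methods >/dev/null 2>&1; do
          sleep 0.5
      done
  }

  wait_for_rpc /var/tmp/bdevperf.sock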
00:21:19.469 [2024-07-15 16:12:55.137780] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2331448 ] 00:21:19.469 EAL: No free 2048 kB hugepages reported on node 1 00:21:19.469 [2024-07-15 16:12:55.214066] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.469 [2024-07-15 16:12:55.267921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:20.411 16:12:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:20.411 16:12:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:20.411 16:12:55 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WBGxBNxgOa 00:21:20.411 16:12:56 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:20.411 [2024-07-15 16:12:56.161744] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:20.411 nvme0n1 00:21:20.672 16:12:56 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:20.672 Running I/O for 1 seconds... 00:21:21.614 00:21:21.614 Latency(us) 00:21:21.614 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.614 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:21.614 Verification LBA range: start 0x0 length 0x2000 00:21:21.614 nvme0n1 : 1.08 2493.00 9.74 0.00 0.00 49780.37 5597.87 71215.79 00:21:21.614 =================================================================================================================== 00:21:21.614 Total : 2493.00 9.74 0.00 0.00 49780.37 5597.87 71215.79 00:21:21.614 0 00:21:21.614 16:12:57 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:21:21.614 16:12:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.614 16:12:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.875 16:12:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.875 16:12:57 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:21:21.875 "subsystems": [ 00:21:21.875 { 00:21:21.875 "subsystem": "keyring", 00:21:21.875 "config": [ 00:21:21.875 { 00:21:21.875 "method": "keyring_file_add_key", 00:21:21.875 "params": { 00:21:21.875 "name": "key0", 00:21:21.875 "path": "/tmp/tmp.WBGxBNxgOa" 00:21:21.875 } 00:21:21.875 } 00:21:21.875 ] 00:21:21.875 }, 00:21:21.875 { 00:21:21.875 "subsystem": "iobuf", 00:21:21.875 "config": [ 00:21:21.875 { 00:21:21.875 "method": "iobuf_set_options", 00:21:21.875 "params": { 00:21:21.875 "small_pool_count": 8192, 00:21:21.875 "large_pool_count": 1024, 00:21:21.875 "small_bufsize": 8192, 00:21:21.875 "large_bufsize": 135168 00:21:21.875 } 00:21:21.875 } 00:21:21.875 ] 00:21:21.875 }, 00:21:21.875 { 00:21:21.875 "subsystem": "sock", 00:21:21.875 "config": [ 00:21:21.875 { 00:21:21.875 "method": "sock_set_default_impl", 00:21:21.875 "params": { 00:21:21.875 "impl_name": "posix" 00:21:21.875 } 
00:21:21.875 }, 00:21:21.875 { 00:21:21.875 "method": "sock_impl_set_options", 00:21:21.875 "params": { 00:21:21.875 "impl_name": "ssl", 00:21:21.875 "recv_buf_size": 4096, 00:21:21.875 "send_buf_size": 4096, 00:21:21.875 "enable_recv_pipe": true, 00:21:21.875 "enable_quickack": false, 00:21:21.875 "enable_placement_id": 0, 00:21:21.875 "enable_zerocopy_send_server": true, 00:21:21.875 "enable_zerocopy_send_client": false, 00:21:21.875 "zerocopy_threshold": 0, 00:21:21.875 "tls_version": 0, 00:21:21.875 "enable_ktls": false 00:21:21.875 } 00:21:21.875 }, 00:21:21.875 { 00:21:21.875 "method": "sock_impl_set_options", 00:21:21.875 "params": { 00:21:21.875 "impl_name": "posix", 00:21:21.875 "recv_buf_size": 2097152, 00:21:21.875 "send_buf_size": 2097152, 00:21:21.875 "enable_recv_pipe": true, 00:21:21.875 "enable_quickack": false, 00:21:21.875 "enable_placement_id": 0, 00:21:21.875 "enable_zerocopy_send_server": true, 00:21:21.875 "enable_zerocopy_send_client": false, 00:21:21.875 "zerocopy_threshold": 0, 00:21:21.875 "tls_version": 0, 00:21:21.875 "enable_ktls": false 00:21:21.875 } 00:21:21.875 } 00:21:21.875 ] 00:21:21.875 }, 00:21:21.875 { 00:21:21.875 "subsystem": "vmd", 00:21:21.875 "config": [] 00:21:21.875 }, 00:21:21.875 { 00:21:21.875 "subsystem": "accel", 00:21:21.875 "config": [ 00:21:21.875 { 00:21:21.875 "method": "accel_set_options", 00:21:21.875 "params": { 00:21:21.875 "small_cache_size": 128, 00:21:21.875 "large_cache_size": 16, 00:21:21.875 "task_count": 2048, 00:21:21.875 "sequence_count": 2048, 00:21:21.875 "buf_count": 2048 00:21:21.875 } 00:21:21.875 } 00:21:21.875 ] 00:21:21.875 }, 00:21:21.875 { 00:21:21.875 "subsystem": "bdev", 00:21:21.875 "config": [ 00:21:21.875 { 00:21:21.875 "method": "bdev_set_options", 00:21:21.875 "params": { 00:21:21.875 "bdev_io_pool_size": 65535, 00:21:21.875 "bdev_io_cache_size": 256, 00:21:21.875 "bdev_auto_examine": true, 00:21:21.875 "iobuf_small_cache_size": 128, 00:21:21.875 "iobuf_large_cache_size": 16 00:21:21.875 } 00:21:21.875 }, 00:21:21.875 { 00:21:21.875 "method": "bdev_raid_set_options", 00:21:21.875 "params": { 00:21:21.875 "process_window_size_kb": 1024 00:21:21.875 } 00:21:21.875 }, 00:21:21.875 { 00:21:21.875 "method": "bdev_iscsi_set_options", 00:21:21.875 "params": { 00:21:21.875 "timeout_sec": 30 00:21:21.875 } 00:21:21.875 }, 00:21:21.875 { 00:21:21.875 "method": "bdev_nvme_set_options", 00:21:21.875 "params": { 00:21:21.875 "action_on_timeout": "none", 00:21:21.875 "timeout_us": 0, 00:21:21.875 "timeout_admin_us": 0, 00:21:21.875 "keep_alive_timeout_ms": 10000, 00:21:21.875 "arbitration_burst": 0, 00:21:21.875 "low_priority_weight": 0, 00:21:21.875 "medium_priority_weight": 0, 00:21:21.875 "high_priority_weight": 0, 00:21:21.875 "nvme_adminq_poll_period_us": 10000, 00:21:21.875 "nvme_ioq_poll_period_us": 0, 00:21:21.875 "io_queue_requests": 0, 00:21:21.875 "delay_cmd_submit": true, 00:21:21.875 "transport_retry_count": 4, 00:21:21.875 "bdev_retry_count": 3, 00:21:21.875 "transport_ack_timeout": 0, 00:21:21.875 "ctrlr_loss_timeout_sec": 0, 00:21:21.875 "reconnect_delay_sec": 0, 00:21:21.875 "fast_io_fail_timeout_sec": 0, 00:21:21.875 "disable_auto_failback": false, 00:21:21.875 "generate_uuids": false, 00:21:21.875 "transport_tos": 0, 00:21:21.875 "nvme_error_stat": false, 00:21:21.875 "rdma_srq_size": 0, 00:21:21.875 "io_path_stat": false, 00:21:21.875 "allow_accel_sequence": false, 00:21:21.875 "rdma_max_cq_size": 0, 00:21:21.875 "rdma_cm_event_timeout_ms": 0, 00:21:21.875 "dhchap_digests": [ 00:21:21.875 "sha256", 
00:21:21.875 "sha384", 00:21:21.875 "sha512" 00:21:21.875 ], 00:21:21.875 "dhchap_dhgroups": [ 00:21:21.875 "null", 00:21:21.875 "ffdhe2048", 00:21:21.875 "ffdhe3072", 00:21:21.875 "ffdhe4096", 00:21:21.875 "ffdhe6144", 00:21:21.875 "ffdhe8192" 00:21:21.875 ] 00:21:21.875 } 00:21:21.875 }, 00:21:21.875 { 00:21:21.875 "method": "bdev_nvme_set_hotplug", 00:21:21.875 "params": { 00:21:21.875 "period_us": 100000, 00:21:21.875 "enable": false 00:21:21.875 } 00:21:21.875 }, 00:21:21.875 { 00:21:21.875 "method": "bdev_malloc_create", 00:21:21.875 "params": { 00:21:21.875 "name": "malloc0", 00:21:21.875 "num_blocks": 8192, 00:21:21.875 "block_size": 4096, 00:21:21.875 "physical_block_size": 4096, 00:21:21.875 "uuid": "a8b5e1df-bc34-450d-a2d3-476a2a638313", 00:21:21.875 "optimal_io_boundary": 0 00:21:21.875 } 00:21:21.875 }, 00:21:21.875 { 00:21:21.875 "method": "bdev_wait_for_examine" 00:21:21.875 } 00:21:21.875 ] 00:21:21.875 }, 00:21:21.875 { 00:21:21.875 "subsystem": "nbd", 00:21:21.875 "config": [] 00:21:21.875 }, 00:21:21.875 { 00:21:21.875 "subsystem": "scheduler", 00:21:21.875 "config": [ 00:21:21.875 { 00:21:21.875 "method": "framework_set_scheduler", 00:21:21.875 "params": { 00:21:21.875 "name": "static" 00:21:21.875 } 00:21:21.875 } 00:21:21.875 ] 00:21:21.875 }, 00:21:21.875 { 00:21:21.875 "subsystem": "nvmf", 00:21:21.875 "config": [ 00:21:21.875 { 00:21:21.875 "method": "nvmf_set_config", 00:21:21.875 "params": { 00:21:21.875 "discovery_filter": "match_any", 00:21:21.875 "admin_cmd_passthru": { 00:21:21.875 "identify_ctrlr": false 00:21:21.875 } 00:21:21.875 } 00:21:21.875 }, 00:21:21.875 { 00:21:21.875 "method": "nvmf_set_max_subsystems", 00:21:21.875 "params": { 00:21:21.875 "max_subsystems": 1024 00:21:21.875 } 00:21:21.875 }, 00:21:21.875 { 00:21:21.875 "method": "nvmf_set_crdt", 00:21:21.875 "params": { 00:21:21.875 "crdt1": 0, 00:21:21.875 "crdt2": 0, 00:21:21.875 "crdt3": 0 00:21:21.875 } 00:21:21.875 }, 00:21:21.875 { 00:21:21.875 "method": "nvmf_create_transport", 00:21:21.875 "params": { 00:21:21.875 "trtype": "TCP", 00:21:21.875 "max_queue_depth": 128, 00:21:21.875 "max_io_qpairs_per_ctrlr": 127, 00:21:21.875 "in_capsule_data_size": 4096, 00:21:21.875 "max_io_size": 131072, 00:21:21.875 "io_unit_size": 131072, 00:21:21.875 "max_aq_depth": 128, 00:21:21.875 "num_shared_buffers": 511, 00:21:21.875 "buf_cache_size": 4294967295, 00:21:21.875 "dif_insert_or_strip": false, 00:21:21.875 "zcopy": false, 00:21:21.875 "c2h_success": false, 00:21:21.875 "sock_priority": 0, 00:21:21.875 "abort_timeout_sec": 1, 00:21:21.875 "ack_timeout": 0, 00:21:21.875 "data_wr_pool_size": 0 00:21:21.875 } 00:21:21.875 }, 00:21:21.875 { 00:21:21.875 "method": "nvmf_create_subsystem", 00:21:21.875 "params": { 00:21:21.875 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.875 "allow_any_host": false, 00:21:21.875 "serial_number": "00000000000000000000", 00:21:21.875 "model_number": "SPDK bdev Controller", 00:21:21.875 "max_namespaces": 32, 00:21:21.875 "min_cntlid": 1, 00:21:21.875 "max_cntlid": 65519, 00:21:21.875 "ana_reporting": false 00:21:21.875 } 00:21:21.875 }, 00:21:21.875 { 00:21:21.875 "method": "nvmf_subsystem_add_host", 00:21:21.875 "params": { 00:21:21.875 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.875 "host": "nqn.2016-06.io.spdk:host1", 00:21:21.875 "psk": "key0" 00:21:21.875 } 00:21:21.875 }, 00:21:21.875 { 00:21:21.875 "method": "nvmf_subsystem_add_ns", 00:21:21.875 "params": { 00:21:21.875 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.875 "namespace": { 00:21:21.875 "nsid": 1, 
00:21:21.875 "bdev_name": "malloc0", 00:21:21.875 "nguid": "A8B5E1DFBC34450DA2D3476A2A638313", 00:21:21.875 "uuid": "a8b5e1df-bc34-450d-a2d3-476a2a638313", 00:21:21.875 "no_auto_visible": false 00:21:21.875 } 00:21:21.875 } 00:21:21.875 }, 00:21:21.875 { 00:21:21.875 "method": "nvmf_subsystem_add_listener", 00:21:21.875 "params": { 00:21:21.875 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.875 "listen_address": { 00:21:21.875 "trtype": "TCP", 00:21:21.875 "adrfam": "IPv4", 00:21:21.875 "traddr": "10.0.0.2", 00:21:21.875 "trsvcid": "4420" 00:21:21.875 }, 00:21:21.875 "secure_channel": true 00:21:21.875 } 00:21:21.875 } 00:21:21.875 ] 00:21:21.875 } 00:21:21.875 ] 00:21:21.875 }' 00:21:21.875 16:12:57 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:22.136 16:12:57 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:21:22.136 "subsystems": [ 00:21:22.136 { 00:21:22.136 "subsystem": "keyring", 00:21:22.136 "config": [ 00:21:22.136 { 00:21:22.136 "method": "keyring_file_add_key", 00:21:22.136 "params": { 00:21:22.136 "name": "key0", 00:21:22.136 "path": "/tmp/tmp.WBGxBNxgOa" 00:21:22.136 } 00:21:22.136 } 00:21:22.136 ] 00:21:22.136 }, 00:21:22.136 { 00:21:22.136 "subsystem": "iobuf", 00:21:22.136 "config": [ 00:21:22.136 { 00:21:22.136 "method": "iobuf_set_options", 00:21:22.136 "params": { 00:21:22.136 "small_pool_count": 8192, 00:21:22.136 "large_pool_count": 1024, 00:21:22.136 "small_bufsize": 8192, 00:21:22.136 "large_bufsize": 135168 00:21:22.136 } 00:21:22.136 } 00:21:22.136 ] 00:21:22.136 }, 00:21:22.136 { 00:21:22.136 "subsystem": "sock", 00:21:22.136 "config": [ 00:21:22.136 { 00:21:22.136 "method": "sock_set_default_impl", 00:21:22.136 "params": { 00:21:22.136 "impl_name": "posix" 00:21:22.136 } 00:21:22.136 }, 00:21:22.136 { 00:21:22.136 "method": "sock_impl_set_options", 00:21:22.136 "params": { 00:21:22.136 "impl_name": "ssl", 00:21:22.136 "recv_buf_size": 4096, 00:21:22.136 "send_buf_size": 4096, 00:21:22.136 "enable_recv_pipe": true, 00:21:22.136 "enable_quickack": false, 00:21:22.136 "enable_placement_id": 0, 00:21:22.136 "enable_zerocopy_send_server": true, 00:21:22.136 "enable_zerocopy_send_client": false, 00:21:22.136 "zerocopy_threshold": 0, 00:21:22.136 "tls_version": 0, 00:21:22.136 "enable_ktls": false 00:21:22.136 } 00:21:22.136 }, 00:21:22.136 { 00:21:22.136 "method": "sock_impl_set_options", 00:21:22.136 "params": { 00:21:22.136 "impl_name": "posix", 00:21:22.136 "recv_buf_size": 2097152, 00:21:22.136 "send_buf_size": 2097152, 00:21:22.136 "enable_recv_pipe": true, 00:21:22.136 "enable_quickack": false, 00:21:22.136 "enable_placement_id": 0, 00:21:22.136 "enable_zerocopy_send_server": true, 00:21:22.136 "enable_zerocopy_send_client": false, 00:21:22.136 "zerocopy_threshold": 0, 00:21:22.136 "tls_version": 0, 00:21:22.136 "enable_ktls": false 00:21:22.136 } 00:21:22.136 } 00:21:22.136 ] 00:21:22.136 }, 00:21:22.136 { 00:21:22.136 "subsystem": "vmd", 00:21:22.136 "config": [] 00:21:22.136 }, 00:21:22.136 { 00:21:22.136 "subsystem": "accel", 00:21:22.136 "config": [ 00:21:22.136 { 00:21:22.136 "method": "accel_set_options", 00:21:22.136 "params": { 00:21:22.136 "small_cache_size": 128, 00:21:22.136 "large_cache_size": 16, 00:21:22.136 "task_count": 2048, 00:21:22.136 "sequence_count": 2048, 00:21:22.136 "buf_count": 2048 00:21:22.136 } 00:21:22.136 } 00:21:22.136 ] 00:21:22.136 }, 00:21:22.136 { 00:21:22.136 "subsystem": "bdev", 00:21:22.136 "config": [ 
00:21:22.136 { 00:21:22.136 "method": "bdev_set_options", 00:21:22.136 "params": { 00:21:22.136 "bdev_io_pool_size": 65535, 00:21:22.136 "bdev_io_cache_size": 256, 00:21:22.136 "bdev_auto_examine": true, 00:21:22.136 "iobuf_small_cache_size": 128, 00:21:22.136 "iobuf_large_cache_size": 16 00:21:22.136 } 00:21:22.136 }, 00:21:22.136 { 00:21:22.136 "method": "bdev_raid_set_options", 00:21:22.136 "params": { 00:21:22.136 "process_window_size_kb": 1024 00:21:22.136 } 00:21:22.136 }, 00:21:22.136 { 00:21:22.136 "method": "bdev_iscsi_set_options", 00:21:22.136 "params": { 00:21:22.136 "timeout_sec": 30 00:21:22.136 } 00:21:22.136 }, 00:21:22.136 { 00:21:22.136 "method": "bdev_nvme_set_options", 00:21:22.136 "params": { 00:21:22.136 "action_on_timeout": "none", 00:21:22.136 "timeout_us": 0, 00:21:22.136 "timeout_admin_us": 0, 00:21:22.136 "keep_alive_timeout_ms": 10000, 00:21:22.136 "arbitration_burst": 0, 00:21:22.136 "low_priority_weight": 0, 00:21:22.136 "medium_priority_weight": 0, 00:21:22.136 "high_priority_weight": 0, 00:21:22.136 "nvme_adminq_poll_period_us": 10000, 00:21:22.136 "nvme_ioq_poll_period_us": 0, 00:21:22.136 "io_queue_requests": 512, 00:21:22.136 "delay_cmd_submit": true, 00:21:22.136 "transport_retry_count": 4, 00:21:22.136 "bdev_retry_count": 3, 00:21:22.136 "transport_ack_timeout": 0, 00:21:22.136 "ctrlr_loss_timeout_sec": 0, 00:21:22.136 "reconnect_delay_sec": 0, 00:21:22.136 "fast_io_fail_timeout_sec": 0, 00:21:22.136 "disable_auto_failback": false, 00:21:22.136 "generate_uuids": false, 00:21:22.136 "transport_tos": 0, 00:21:22.136 "nvme_error_stat": false, 00:21:22.136 "rdma_srq_size": 0, 00:21:22.136 "io_path_stat": false, 00:21:22.136 "allow_accel_sequence": false, 00:21:22.136 "rdma_max_cq_size": 0, 00:21:22.136 "rdma_cm_event_timeout_ms": 0, 00:21:22.136 "dhchap_digests": [ 00:21:22.136 "sha256", 00:21:22.136 "sha384", 00:21:22.136 "sha512" 00:21:22.136 ], 00:21:22.136 "dhchap_dhgroups": [ 00:21:22.136 "null", 00:21:22.136 "ffdhe2048", 00:21:22.136 "ffdhe3072", 00:21:22.136 "ffdhe4096", 00:21:22.136 "ffdhe6144", 00:21:22.136 "ffdhe8192" 00:21:22.136 ] 00:21:22.136 } 00:21:22.136 }, 00:21:22.136 { 00:21:22.136 "method": "bdev_nvme_attach_controller", 00:21:22.136 "params": { 00:21:22.136 "name": "nvme0", 00:21:22.136 "trtype": "TCP", 00:21:22.137 "adrfam": "IPv4", 00:21:22.137 "traddr": "10.0.0.2", 00:21:22.137 "trsvcid": "4420", 00:21:22.137 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:22.137 "prchk_reftag": false, 00:21:22.137 "prchk_guard": false, 00:21:22.137 "ctrlr_loss_timeout_sec": 0, 00:21:22.137 "reconnect_delay_sec": 0, 00:21:22.137 "fast_io_fail_timeout_sec": 0, 00:21:22.137 "psk": "key0", 00:21:22.137 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:22.137 "hdgst": false, 00:21:22.137 "ddgst": false 00:21:22.137 } 00:21:22.137 }, 00:21:22.137 { 00:21:22.137 "method": "bdev_nvme_set_hotplug", 00:21:22.137 "params": { 00:21:22.137 "period_us": 100000, 00:21:22.137 "enable": false 00:21:22.137 } 00:21:22.137 }, 00:21:22.137 { 00:21:22.137 "method": "bdev_enable_histogram", 00:21:22.137 "params": { 00:21:22.137 "name": "nvme0n1", 00:21:22.137 "enable": true 00:21:22.137 } 00:21:22.137 }, 00:21:22.137 { 00:21:22.137 "method": "bdev_wait_for_examine" 00:21:22.137 } 00:21:22.137 ] 00:21:22.137 }, 00:21:22.137 { 00:21:22.137 "subsystem": "nbd", 00:21:22.137 "config": [] 00:21:22.137 } 00:21:22.137 ] 00:21:22.137 }' 00:21:22.137 16:12:57 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 2331448 00:21:22.137 16:12:57 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 2331448 ']' 00:21:22.137 16:12:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2331448 00:21:22.137 16:12:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:22.137 16:12:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:22.137 16:12:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2331448 00:21:22.137 16:12:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:22.137 16:12:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:22.137 16:12:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2331448' 00:21:22.137 killing process with pid 2331448 00:21:22.137 16:12:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2331448 00:21:22.137 Received shutdown signal, test time was about 1.000000 seconds 00:21:22.137 00:21:22.137 Latency(us) 00:21:22.137 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:22.137 =================================================================================================================== 00:21:22.137 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:22.137 16:12:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2331448 00:21:22.137 16:12:57 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 2331416 00:21:22.137 16:12:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2331416 ']' 00:21:22.137 16:12:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2331416 00:21:22.137 16:12:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:22.137 16:12:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:22.137 16:12:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2331416 00:21:22.397 16:12:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:22.397 16:12:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:22.397 16:12:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2331416' 00:21:22.397 killing process with pid 2331416 00:21:22.397 16:12:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2331416 00:21:22.397 16:12:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2331416 00:21:22.397 16:12:58 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:21:22.397 16:12:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:22.397 16:12:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:22.397 16:12:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:22.397 16:12:58 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:21:22.397 "subsystems": [ 00:21:22.397 { 00:21:22.397 "subsystem": "keyring", 00:21:22.397 "config": [ 00:21:22.397 { 00:21:22.397 "method": "keyring_file_add_key", 00:21:22.397 "params": { 00:21:22.397 "name": "key0", 00:21:22.397 "path": "/tmp/tmp.WBGxBNxgOa" 00:21:22.397 } 00:21:22.397 } 00:21:22.397 ] 00:21:22.397 }, 00:21:22.397 { 00:21:22.397 "subsystem": "iobuf", 00:21:22.397 "config": [ 00:21:22.397 { 00:21:22.397 "method": "iobuf_set_options", 00:21:22.397 "params": { 00:21:22.397 "small_pool_count": 8192, 00:21:22.397 "large_pool_count": 1024, 00:21:22.397 "small_bufsize": 8192, 00:21:22.397 
"large_bufsize": 135168 00:21:22.397 } 00:21:22.397 } 00:21:22.397 ] 00:21:22.397 }, 00:21:22.397 { 00:21:22.397 "subsystem": "sock", 00:21:22.397 "config": [ 00:21:22.397 { 00:21:22.397 "method": "sock_set_default_impl", 00:21:22.397 "params": { 00:21:22.397 "impl_name": "posix" 00:21:22.397 } 00:21:22.397 }, 00:21:22.397 { 00:21:22.397 "method": "sock_impl_set_options", 00:21:22.397 "params": { 00:21:22.397 "impl_name": "ssl", 00:21:22.397 "recv_buf_size": 4096, 00:21:22.397 "send_buf_size": 4096, 00:21:22.397 "enable_recv_pipe": true, 00:21:22.397 "enable_quickack": false, 00:21:22.397 "enable_placement_id": 0, 00:21:22.397 "enable_zerocopy_send_server": true, 00:21:22.397 "enable_zerocopy_send_client": false, 00:21:22.397 "zerocopy_threshold": 0, 00:21:22.397 "tls_version": 0, 00:21:22.397 "enable_ktls": false 00:21:22.397 } 00:21:22.397 }, 00:21:22.397 { 00:21:22.397 "method": "sock_impl_set_options", 00:21:22.397 "params": { 00:21:22.397 "impl_name": "posix", 00:21:22.397 "recv_buf_size": 2097152, 00:21:22.397 "send_buf_size": 2097152, 00:21:22.397 "enable_recv_pipe": true, 00:21:22.397 "enable_quickack": false, 00:21:22.397 "enable_placement_id": 0, 00:21:22.397 "enable_zerocopy_send_server": true, 00:21:22.397 "enable_zerocopy_send_client": false, 00:21:22.397 "zerocopy_threshold": 0, 00:21:22.397 "tls_version": 0, 00:21:22.397 "enable_ktls": false 00:21:22.397 } 00:21:22.397 } 00:21:22.397 ] 00:21:22.397 }, 00:21:22.397 { 00:21:22.397 "subsystem": "vmd", 00:21:22.397 "config": [] 00:21:22.397 }, 00:21:22.397 { 00:21:22.397 "subsystem": "accel", 00:21:22.397 "config": [ 00:21:22.397 { 00:21:22.397 "method": "accel_set_options", 00:21:22.397 "params": { 00:21:22.397 "small_cache_size": 128, 00:21:22.397 "large_cache_size": 16, 00:21:22.397 "task_count": 2048, 00:21:22.397 "sequence_count": 2048, 00:21:22.397 "buf_count": 2048 00:21:22.397 } 00:21:22.397 } 00:21:22.397 ] 00:21:22.397 }, 00:21:22.397 { 00:21:22.397 "subsystem": "bdev", 00:21:22.397 "config": [ 00:21:22.397 { 00:21:22.397 "method": "bdev_set_options", 00:21:22.397 "params": { 00:21:22.397 "bdev_io_pool_size": 65535, 00:21:22.397 "bdev_io_cache_size": 256, 00:21:22.397 "bdev_auto_examine": true, 00:21:22.397 "iobuf_small_cache_size": 128, 00:21:22.397 "iobuf_large_cache_size": 16 00:21:22.397 } 00:21:22.397 }, 00:21:22.397 { 00:21:22.397 "method": "bdev_raid_set_options", 00:21:22.397 "params": { 00:21:22.397 "process_window_size_kb": 1024 00:21:22.397 } 00:21:22.397 }, 00:21:22.397 { 00:21:22.397 "method": "bdev_iscsi_set_options", 00:21:22.397 "params": { 00:21:22.397 "timeout_sec": 30 00:21:22.397 } 00:21:22.397 }, 00:21:22.397 { 00:21:22.397 "method": "bdev_nvme_set_options", 00:21:22.397 "params": { 00:21:22.397 "action_on_timeout": "none", 00:21:22.397 "timeout_us": 0, 00:21:22.397 "timeout_admin_us": 0, 00:21:22.397 "keep_alive_timeout_ms": 10000, 00:21:22.397 "arbitration_burst": 0, 00:21:22.397 "low_priority_weight": 0, 00:21:22.397 "medium_priority_weight": 0, 00:21:22.397 "high_priority_weight": 0, 00:21:22.397 "nvme_adminq_poll_period_us": 10000, 00:21:22.397 "nvme_ioq_poll_period_us": 0, 00:21:22.397 "io_queue_requests": 0, 00:21:22.397 "delay_cmd_submit": true, 00:21:22.397 "transport_retry_count": 4, 00:21:22.397 "bdev_retry_count": 3, 00:21:22.397 "transport_ack_timeout": 0, 00:21:22.397 "ctrlr_loss_timeout_sec": 0, 00:21:22.397 "reconnect_delay_sec": 0, 00:21:22.397 "fast_io_fail_timeout_sec": 0, 00:21:22.397 "disable_auto_failback": false, 00:21:22.397 "generate_uuids": false, 00:21:22.397 
"transport_tos": 0, 00:21:22.397 "nvme_error_stat": false, 00:21:22.397 "rdma_srq_size": 0, 00:21:22.397 "io_path_stat": false, 00:21:22.397 "allow_accel_sequence": false, 00:21:22.397 "rdma_max_cq_size": 0, 00:21:22.397 "rdma_cm_event_timeout_ms": 0, 00:21:22.397 "dhchap_digests": [ 00:21:22.397 "sha256", 00:21:22.397 "sha384", 00:21:22.397 "sha512" 00:21:22.397 ], 00:21:22.397 "dhchap_dhgroups": [ 00:21:22.397 "null", 00:21:22.397 "ffdhe2048", 00:21:22.397 "ffdhe3072", 00:21:22.397 "ffdhe4096", 00:21:22.397 "ffdhe6144", 00:21:22.397 "ffdhe8192" 00:21:22.397 ] 00:21:22.397 } 00:21:22.397 }, 00:21:22.397 { 00:21:22.397 "method": "bdev_nvme_set_hotplug", 00:21:22.397 "params": { 00:21:22.397 "period_us": 100000, 00:21:22.397 "enable": false 00:21:22.397 } 00:21:22.397 }, 00:21:22.397 { 00:21:22.397 "method": "bdev_malloc_create", 00:21:22.397 "params": { 00:21:22.397 "name": "malloc0", 00:21:22.397 "num_blocks": 8192, 00:21:22.397 "block_size": 4096, 00:21:22.397 "physical_block_size": 4096, 00:21:22.397 "uuid": "a8b5e1df-bc34-450d-a2d3-476a2a638313", 00:21:22.397 "optimal_io_boundary": 0 00:21:22.397 } 00:21:22.397 }, 00:21:22.397 { 00:21:22.397 "method": "bdev_wait_for_examine" 00:21:22.397 } 00:21:22.397 ] 00:21:22.397 }, 00:21:22.397 { 00:21:22.397 "subsystem": "nbd", 00:21:22.397 "config": [] 00:21:22.397 }, 00:21:22.397 { 00:21:22.398 "subsystem": "scheduler", 00:21:22.398 "config": [ 00:21:22.398 { 00:21:22.398 "method": "framework_set_scheduler", 00:21:22.398 "params": { 00:21:22.398 "name": "static" 00:21:22.398 } 00:21:22.398 } 00:21:22.398 ] 00:21:22.398 }, 00:21:22.398 { 00:21:22.398 "subsystem": "nvmf", 00:21:22.398 "config": [ 00:21:22.398 { 00:21:22.398 "method": "nvmf_set_config", 00:21:22.398 "params": { 00:21:22.398 "discovery_filter": "match_any", 00:21:22.398 "admin_cmd_passthru": { 00:21:22.398 "identify_ctrlr": false 00:21:22.398 } 00:21:22.398 } 00:21:22.398 }, 00:21:22.398 { 00:21:22.398 "method": "nvmf_set_max_subsystems", 00:21:22.398 "params": { 00:21:22.398 "max_subsystems": 1024 00:21:22.398 } 00:21:22.398 }, 00:21:22.398 { 00:21:22.398 "method": "nvmf_set_crdt", 00:21:22.398 "params": { 00:21:22.398 "crdt1": 0, 00:21:22.398 "crdt2": 0, 00:21:22.398 "crdt3": 0 00:21:22.398 } 00:21:22.398 }, 00:21:22.398 { 00:21:22.398 "method": "nvmf_create_transport", 00:21:22.398 "params": { 00:21:22.398 "trtype": "TCP", 00:21:22.398 "max_queue_depth": 128, 00:21:22.398 "max_io_qpairs_per_ctrlr": 127, 00:21:22.398 "in_capsule_data_size": 4096, 00:21:22.398 "max_io_size": 131072, 00:21:22.398 "io_unit_size": 131072, 00:21:22.398 "max_aq_depth": 128, 00:21:22.398 "num_shared_buffers": 511, 00:21:22.398 "buf_cache_size": 4294967295, 00:21:22.398 "dif_insert_or_strip": false, 00:21:22.398 "zcopy": false, 00:21:22.398 "c2h_success": false, 00:21:22.398 "sock_priority": 0, 00:21:22.398 "abort_timeout_sec": 1, 00:21:22.398 "ack_timeout": 0, 00:21:22.398 "data_wr_pool_size": 0 00:21:22.398 } 00:21:22.398 }, 00:21:22.398 { 00:21:22.398 "method": "nvmf_create_subsystem", 00:21:22.398 "params": { 00:21:22.398 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:22.398 "allow_any_host": false, 00:21:22.398 "serial_number": "00000000000000000000", 00:21:22.398 "model_number": "SPDK bdev Controller", 00:21:22.398 "max_namespaces": 32, 00:21:22.398 "min_cntlid": 1, 00:21:22.398 "max_cntlid": 65519, 00:21:22.398 "ana_reporting": false 00:21:22.398 } 00:21:22.398 }, 00:21:22.398 { 00:21:22.398 "method": "nvmf_subsystem_add_host", 00:21:22.398 "params": { 00:21:22.398 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:21:22.398 "host": "nqn.2016-06.io.spdk:host1", 00:21:22.398 "psk": "key0" 00:21:22.398 } 00:21:22.398 }, 00:21:22.398 { 00:21:22.398 "method": "nvmf_subsystem_add_ns", 00:21:22.398 "params": { 00:21:22.398 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:22.398 "namespace": { 00:21:22.398 "nsid": 1, 00:21:22.398 "bdev_name": "malloc0", 00:21:22.398 "nguid": "A8B5E1DFBC34450DA2D3476A2A638313", 00:21:22.398 "uuid": "a8b5e1df-bc34-450d-a2d3-476a2a638313", 00:21:22.398 "no_auto_visible": false 00:21:22.398 } 00:21:22.398 } 00:21:22.398 }, 00:21:22.398 { 00:21:22.398 "method": "nvmf_subsystem_add_listener", 00:21:22.398 "params": { 00:21:22.398 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:22.398 "listen_address": { 00:21:22.398 "trtype": "TCP", 00:21:22.398 "adrfam": "IPv4", 00:21:22.398 "traddr": "10.0.0.2", 00:21:22.398 "trsvcid": "4420" 00:21:22.398 }, 00:21:22.398 "secure_channel": true 00:21:22.398 } 00:21:22.398 } 00:21:22.398 ] 00:21:22.398 } 00:21:22.398 ] 00:21:22.398 }' 00:21:22.398 16:12:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2332133 00:21:22.398 16:12:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2332133 00:21:22.398 16:12:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:22.398 16:12:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2332133 ']' 00:21:22.398 16:12:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:22.398 16:12:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:22.398 16:12:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:22.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:22.398 16:12:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:22.398 16:12:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:22.398 [2024-07-15 16:12:58.214275] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:21:22.398 [2024-07-15 16:12:58.214332] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:22.658 EAL: No free 2048 kB hugepages reported on node 1 00:21:22.658 [2024-07-15 16:12:58.277955] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.658 [2024-07-15 16:12:58.342488] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:22.658 [2024-07-15 16:12:58.342524] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:22.658 [2024-07-15 16:12:58.342531] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:22.658 [2024-07-15 16:12:58.342538] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:22.658 [2024-07-15 16:12:58.342543] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:22.658 [2024-07-15 16:12:58.342593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:22.918 [2024-07-15 16:12:58.539541] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:22.918 [2024-07-15 16:12:58.571556] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:22.918 [2024-07-15 16:12:58.584285] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:23.178 16:12:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:23.178 16:12:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:23.178 16:12:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:23.178 16:12:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:23.178 16:12:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:23.178 16:12:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:23.439 16:12:59 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=2332356 00:21:23.439 16:12:59 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 2332356 /var/tmp/bdevperf.sock 00:21:23.439 16:12:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2332356 ']' 00:21:23.439 16:12:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:23.439 16:12:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:23.439 16:12:59 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:23.439 16:12:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:23.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
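The initiator half of the test is bdevperf, started idle with -z so that it only begins I/O once instructed over its RPC socket (-r /var/tmp/bdevperf.sock); its own configuration, including the bdev_nvme_attach_controller call that carries psk key0, arrives the same way on /dev/fd/63. A condensed sketch of that driving sequence follows, using the flags and socket path from the command line above. The readiness loop is a stand-in for the suite's waitforlisten helper, and $bperfcfg is assumed to hold the JSON printed by save_config earlier in the log.

# Start bdevperf idle on core mask 0x2 with a private RPC socket, feeding the
# saved configuration over an inherited descriptor just like the target side.
./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &

# Poll the socket until the application answers RPCs (the suite uses its own
# waitforlisten helper for this step).
until ./scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done

# Confirm the TLS-attached controller exists, then kick off the queued
# 1-second verify workload: the same two calls that appear further down
# in this log.
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests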
00:21:23.439 16:12:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:23.439 16:12:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:23.439 16:12:59 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:21:23.439 "subsystems": [ 00:21:23.439 { 00:21:23.439 "subsystem": "keyring", 00:21:23.439 "config": [ 00:21:23.439 { 00:21:23.439 "method": "keyring_file_add_key", 00:21:23.439 "params": { 00:21:23.439 "name": "key0", 00:21:23.439 "path": "/tmp/tmp.WBGxBNxgOa" 00:21:23.439 } 00:21:23.439 } 00:21:23.439 ] 00:21:23.439 }, 00:21:23.439 { 00:21:23.439 "subsystem": "iobuf", 00:21:23.439 "config": [ 00:21:23.439 { 00:21:23.439 "method": "iobuf_set_options", 00:21:23.439 "params": { 00:21:23.439 "small_pool_count": 8192, 00:21:23.439 "large_pool_count": 1024, 00:21:23.439 "small_bufsize": 8192, 00:21:23.439 "large_bufsize": 135168 00:21:23.439 } 00:21:23.439 } 00:21:23.439 ] 00:21:23.439 }, 00:21:23.439 { 00:21:23.439 "subsystem": "sock", 00:21:23.439 "config": [ 00:21:23.439 { 00:21:23.439 "method": "sock_set_default_impl", 00:21:23.439 "params": { 00:21:23.439 "impl_name": "posix" 00:21:23.439 } 00:21:23.439 }, 00:21:23.439 { 00:21:23.439 "method": "sock_impl_set_options", 00:21:23.439 "params": { 00:21:23.439 "impl_name": "ssl", 00:21:23.439 "recv_buf_size": 4096, 00:21:23.439 "send_buf_size": 4096, 00:21:23.439 "enable_recv_pipe": true, 00:21:23.439 "enable_quickack": false, 00:21:23.439 "enable_placement_id": 0, 00:21:23.439 "enable_zerocopy_send_server": true, 00:21:23.439 "enable_zerocopy_send_client": false, 00:21:23.439 "zerocopy_threshold": 0, 00:21:23.439 "tls_version": 0, 00:21:23.439 "enable_ktls": false 00:21:23.439 } 00:21:23.439 }, 00:21:23.439 { 00:21:23.439 "method": "sock_impl_set_options", 00:21:23.439 "params": { 00:21:23.439 "impl_name": "posix", 00:21:23.439 "recv_buf_size": 2097152, 00:21:23.439 "send_buf_size": 2097152, 00:21:23.439 "enable_recv_pipe": true, 00:21:23.439 "enable_quickack": false, 00:21:23.439 "enable_placement_id": 0, 00:21:23.439 "enable_zerocopy_send_server": true, 00:21:23.439 "enable_zerocopy_send_client": false, 00:21:23.439 "zerocopy_threshold": 0, 00:21:23.439 "tls_version": 0, 00:21:23.439 "enable_ktls": false 00:21:23.439 } 00:21:23.439 } 00:21:23.439 ] 00:21:23.439 }, 00:21:23.439 { 00:21:23.439 "subsystem": "vmd", 00:21:23.439 "config": [] 00:21:23.439 }, 00:21:23.439 { 00:21:23.439 "subsystem": "accel", 00:21:23.439 "config": [ 00:21:23.439 { 00:21:23.439 "method": "accel_set_options", 00:21:23.439 "params": { 00:21:23.439 "small_cache_size": 128, 00:21:23.439 "large_cache_size": 16, 00:21:23.439 "task_count": 2048, 00:21:23.439 "sequence_count": 2048, 00:21:23.439 "buf_count": 2048 00:21:23.439 } 00:21:23.439 } 00:21:23.439 ] 00:21:23.439 }, 00:21:23.439 { 00:21:23.439 "subsystem": "bdev", 00:21:23.439 "config": [ 00:21:23.439 { 00:21:23.439 "method": "bdev_set_options", 00:21:23.439 "params": { 00:21:23.439 "bdev_io_pool_size": 65535, 00:21:23.439 "bdev_io_cache_size": 256, 00:21:23.439 "bdev_auto_examine": true, 00:21:23.439 "iobuf_small_cache_size": 128, 00:21:23.439 "iobuf_large_cache_size": 16 00:21:23.439 } 00:21:23.439 }, 00:21:23.439 { 00:21:23.439 "method": "bdev_raid_set_options", 00:21:23.439 "params": { 00:21:23.439 "process_window_size_kb": 1024 00:21:23.439 } 00:21:23.439 }, 00:21:23.439 { 00:21:23.439 "method": "bdev_iscsi_set_options", 00:21:23.439 "params": { 00:21:23.439 "timeout_sec": 30 00:21:23.439 } 00:21:23.439 }, 00:21:23.439 { 00:21:23.439 "method": 
"bdev_nvme_set_options", 00:21:23.439 "params": { 00:21:23.439 "action_on_timeout": "none", 00:21:23.439 "timeout_us": 0, 00:21:23.439 "timeout_admin_us": 0, 00:21:23.440 "keep_alive_timeout_ms": 10000, 00:21:23.440 "arbitration_burst": 0, 00:21:23.440 "low_priority_weight": 0, 00:21:23.440 "medium_priority_weight": 0, 00:21:23.440 "high_priority_weight": 0, 00:21:23.440 "nvme_adminq_poll_period_us": 10000, 00:21:23.440 "nvme_ioq_poll_period_us": 0, 00:21:23.440 "io_queue_requests": 512, 00:21:23.440 "delay_cmd_submit": true, 00:21:23.440 "transport_retry_count": 4, 00:21:23.440 "bdev_retry_count": 3, 00:21:23.440 "transport_ack_timeout": 0, 00:21:23.440 "ctrlr_loss_timeout_sec": 0, 00:21:23.440 "reconnect_delay_sec": 0, 00:21:23.440 "fast_io_fail_timeout_sec": 0, 00:21:23.440 "disable_auto_failback": false, 00:21:23.440 "generate_uuids": false, 00:21:23.440 "transport_tos": 0, 00:21:23.440 "nvme_error_stat": false, 00:21:23.440 "rdma_srq_size": 0, 00:21:23.440 "io_path_stat": false, 00:21:23.440 "allow_accel_sequence": false, 00:21:23.440 "rdma_max_cq_size": 0, 00:21:23.440 "rdma_cm_event_timeout_ms": 0, 00:21:23.440 "dhchap_digests": [ 00:21:23.440 "sha256", 00:21:23.440 "sha384", 00:21:23.440 "sha512" 00:21:23.440 ], 00:21:23.440 "dhchap_dhgroups": [ 00:21:23.440 "null", 00:21:23.440 "ffdhe2048", 00:21:23.440 "ffdhe3072", 00:21:23.440 "ffdhe4096", 00:21:23.440 "ffdhe6144", 00:21:23.440 "ffdhe8192" 00:21:23.440 ] 00:21:23.440 } 00:21:23.440 }, 00:21:23.440 { 00:21:23.440 "method": "bdev_nvme_attach_controller", 00:21:23.440 "params": { 00:21:23.440 "name": "nvme0", 00:21:23.440 "trtype": "TCP", 00:21:23.440 "adrfam": "IPv4", 00:21:23.440 "traddr": "10.0.0.2", 00:21:23.440 "trsvcid": "4420", 00:21:23.440 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.440 "prchk_reftag": false, 00:21:23.440 "prchk_guard": false, 00:21:23.440 "ctrlr_loss_timeout_sec": 0, 00:21:23.440 "reconnect_delay_sec": 0, 00:21:23.440 "fast_io_fail_timeout_sec": 0, 00:21:23.440 "psk": "key0", 00:21:23.440 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:23.440 "hdgst": false, 00:21:23.440 "ddgst": false 00:21:23.440 } 00:21:23.440 }, 00:21:23.440 { 00:21:23.440 "method": "bdev_nvme_set_hotplug", 00:21:23.440 "params": { 00:21:23.440 "period_us": 100000, 00:21:23.440 "enable": false 00:21:23.440 } 00:21:23.440 }, 00:21:23.440 { 00:21:23.440 "method": "bdev_enable_histogram", 00:21:23.440 "params": { 00:21:23.440 "name": "nvme0n1", 00:21:23.440 "enable": true 00:21:23.440 } 00:21:23.440 }, 00:21:23.440 { 00:21:23.440 "method": "bdev_wait_for_examine" 00:21:23.440 } 00:21:23.440 ] 00:21:23.440 }, 00:21:23.440 { 00:21:23.440 "subsystem": "nbd", 00:21:23.440 "config": [] 00:21:23.440 } 00:21:23.440 ] 00:21:23.440 }' 00:21:23.440 [2024-07-15 16:12:59.067414] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
00:21:23.440 [2024-07-15 16:12:59.067463] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2332356 ] 00:21:23.440 EAL: No free 2048 kB hugepages reported on node 1 00:21:23.440 [2024-07-15 16:12:59.141342] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.440 [2024-07-15 16:12:59.195144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.700 [2024-07-15 16:12:59.328488] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:24.271 16:12:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:24.271 16:12:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:24.271 16:12:59 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:24.271 16:12:59 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:21:24.271 16:13:00 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.271 16:13:00 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:24.271 Running I/O for 1 seconds... 00:21:25.695 00:21:25.695 Latency(us) 00:21:25.695 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.695 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:25.695 Verification LBA range: start 0x0 length 0x2000 00:21:25.695 nvme0n1 : 1.04 2968.84 11.60 0.00 0.00 42264.47 5543.25 61603.84 00:21:25.695 =================================================================================================================== 00:21:25.695 Total : 2968.84 11.60 0.00 0.00 42264.47 5543.25 61603.84 00:21:25.695 0 00:21:25.695 16:13:01 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:21:25.695 16:13:01 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:21:25.695 16:13:01 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:25.695 16:13:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:21:25.695 16:13:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:21:25.695 16:13:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:21:25.695 16:13:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:25.695 16:13:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:21:25.695 16:13:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:21:25.695 16:13:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:21:25.695 16:13:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:25.695 nvmf_trace.0 00:21:25.695 16:13:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:21:25.695 16:13:01 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 2332356 00:21:25.695 16:13:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2332356 ']' 00:21:25.695 16:13:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- 
# kill -0 2332356 00:21:25.695 16:13:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:25.695 16:13:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:25.695 16:13:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2332356 00:21:25.695 16:13:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:25.695 16:13:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:25.695 16:13:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2332356' 00:21:25.695 killing process with pid 2332356 00:21:25.695 16:13:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2332356 00:21:25.695 Received shutdown signal, test time was about 1.000000 seconds 00:21:25.695 00:21:25.695 Latency(us) 00:21:25.695 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.695 =================================================================================================================== 00:21:25.695 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:25.695 16:13:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2332356 00:21:25.695 16:13:01 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:25.695 16:13:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:25.695 16:13:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:21:25.695 16:13:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:25.695 16:13:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:21:25.695 16:13:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:25.695 16:13:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:25.695 rmmod nvme_tcp 00:21:25.695 rmmod nvme_fabrics 00:21:25.695 rmmod nvme_keyring 00:21:25.695 16:13:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:25.695 16:13:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:21:25.695 16:13:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:21:25.695 16:13:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 2332133 ']' 00:21:25.695 16:13:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 2332133 00:21:25.695 16:13:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2332133 ']' 00:21:25.695 16:13:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2332133 00:21:25.695 16:13:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:25.695 16:13:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:25.695 16:13:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2332133 00:21:25.695 16:13:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:25.695 16:13:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:25.695 16:13:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2332133' 00:21:25.695 killing process with pid 2332133 00:21:25.695 16:13:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2332133 00:21:25.695 16:13:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2332133 00:21:25.956 16:13:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:25.956 16:13:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:25.956 16:13:01 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:25.956 16:13:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:25.956 16:13:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:25.956 16:13:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.956 16:13:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:25.956 16:13:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:28.503 16:13:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:28.503 16:13:03 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.2wphDcBoYa /tmp/tmp.VQu3SRxPr2 /tmp/tmp.WBGxBNxgOa 00:21:28.503 00:21:28.503 real 1m22.937s 00:21:28.503 user 2m5.378s 00:21:28.503 sys 0m28.914s 00:21:28.503 16:13:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:28.503 16:13:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:28.503 ************************************ 00:21:28.503 END TEST nvmf_tls 00:21:28.503 ************************************ 00:21:28.503 16:13:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:28.503 16:13:03 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:28.503 16:13:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:28.503 16:13:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:28.503 16:13:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:28.503 ************************************ 00:21:28.503 START TEST nvmf_fips 00:21:28.503 ************************************ 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:28.503 * Looking for test storage... 
00:21:28.503 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.503 16:13:03 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:21:28.503 16:13:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:28.504 16:13:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:28.504 16:13:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:28.504 16:13:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:21:28.504 16:13:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:28.504 16:13:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:28.504 16:13:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:28.504 16:13:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:28.504 16:13:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:28.504 16:13:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:28.504 16:13:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:28.504 16:13:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:28.504 16:13:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:21:28.504 Error setting digest 00:21:28.504 0042529EFB7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:21:28.504 0042529EFB7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:21:28.504 16:13:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:36.645 16:13:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:36.645 16:13:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:21:36.645 16:13:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:36.645 16:13:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:36.645 16:13:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:36.645 16:13:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:36.645 16:13:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:36.645 16:13:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:21:36.645 16:13:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:36.645 16:13:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:21:36.645 16:13:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:21:36.645 16:13:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:21:36.645 16:13:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:21:36.645 16:13:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:21:36.645 16:13:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:21:36.645 16:13:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:36.645 16:13:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:36.645 16:13:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:36.645 16:13:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:36.645 16:13:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:36.645 16:13:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:36.645 16:13:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:36.645 16:13:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:36.645 16:13:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:36.645 16:13:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:36.645 16:13:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:36.645 16:13:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:36.645 16:13:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:36.645 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:36.645 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:36.645 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:36.645 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:36.646 
16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:36.646 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:36.646 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:36.646 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:36.646 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:36.646 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:36.646 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.573 ms 00:21:36.646 00:21:36.646 --- 10.0.0.2 ping statistics --- 00:21:36.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.646 rtt min/avg/max/mdev = 0.573/0.573/0.573/0.000 ms 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:36.646 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:36.646 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.502 ms 00:21:36.646 00:21:36.646 --- 10.0.0.1 ping statistics --- 00:21:36.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.646 rtt min/avg/max/mdev = 0.502/0.502/0.502/0.000 ms 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=2337031 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 2337031 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 2337031 ']' 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:36.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:36.646 16:13:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:36.646 [2024-07-15 16:13:11.454815] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:21:36.646 [2024-07-15 16:13:11.454885] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:36.646 EAL: No free 2048 kB hugepages reported on node 1 00:21:36.646 [2024-07-15 16:13:11.543729] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.646 [2024-07-15 16:13:11.637926] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:36.646 [2024-07-15 16:13:11.637982] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:36.646 [2024-07-15 16:13:11.637990] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:36.646 [2024-07-15 16:13:11.637998] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:36.646 [2024-07-15 16:13:11.638004] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:36.646 [2024-07-15 16:13:11.638038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:36.646 16:13:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:36.646 16:13:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:21:36.646 16:13:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:36.646 16:13:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:36.646 16:13:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:36.646 16:13:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:36.646 16:13:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:21:36.646 16:13:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:36.646 16:13:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:36.646 16:13:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:36.646 16:13:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:36.646 16:13:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:36.646 16:13:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:36.646 16:13:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:36.646 [2024-07-15 16:13:12.424216] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:36.646 [2024-07-15 16:13:12.440208] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:36.646 [2024-07-15 16:13:12.440437] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:36.646 [2024-07-15 16:13:12.470518] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:36.646 malloc0 00:21:36.907 16:13:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:36.907 16:13:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=2337214 00:21:36.907 16:13:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 2337214 /var/tmp/bdevperf.sock 00:21:36.907 16:13:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:36.907 16:13:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 2337214 ']' 00:21:36.907 16:13:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:36.907 16:13:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:21:36.907 16:13:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:36.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:36.907 16:13:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:36.907 16:13:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:36.907 [2024-07-15 16:13:12.562713] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:21:36.907 [2024-07-15 16:13:12.562790] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2337214 ] 00:21:36.907 EAL: No free 2048 kB hugepages reported on node 1 00:21:36.907 [2024-07-15 16:13:12.619447] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.907 [2024-07-15 16:13:12.682996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:37.847 16:13:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:37.847 16:13:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:21:37.847 16:13:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:37.847 [2024-07-15 16:13:13.474785] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:37.847 [2024-07-15 16:13:13.474851] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:37.847 TLSTESTn1 00:21:37.847 16:13:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:37.847 Running I/O for 10 seconds... 
00:21:50.072 00:21:50.072 Latency(us) 00:21:50.072 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:50.072 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:50.072 Verification LBA range: start 0x0 length 0x2000 00:21:50.072 TLSTESTn1 : 10.08 2492.33 9.74 0.00 0.00 51179.73 6198.61 123207.68 00:21:50.072 =================================================================================================================== 00:21:50.072 Total : 2492.33 9.74 0.00 0.00 51179.73 6198.61 123207.68 00:21:50.072 0 00:21:50.072 16:13:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:50.072 16:13:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:50.072 16:13:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:21:50.072 16:13:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:21:50.072 16:13:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:21:50.072 16:13:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:50.072 16:13:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:21:50.072 16:13:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:21:50.072 16:13:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:21:50.072 16:13:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:50.072 nvmf_trace.0 00:21:50.072 16:13:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:21:50.072 16:13:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2337214 00:21:50.072 16:13:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 2337214 ']' 00:21:50.072 16:13:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 2337214 00:21:50.072 16:13:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:21:50.072 16:13:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:50.072 16:13:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2337214 00:21:50.072 16:13:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:50.072 16:13:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:50.072 16:13:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2337214' 00:21:50.072 killing process with pid 2337214 00:21:50.072 16:13:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 2337214 00:21:50.072 Received shutdown signal, test time was about 10.000000 seconds 00:21:50.072 00:21:50.072 Latency(us) 00:21:50.072 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:50.072 =================================================================================================================== 00:21:50.072 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:50.072 [2024-07-15 16:13:23.927625] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:50.072 16:13:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 2337214 00:21:50.072 16:13:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:50.072 16:13:24 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:21:50.072 16:13:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:21:50.072 16:13:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:50.072 16:13:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:21:50.072 16:13:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:50.072 16:13:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:50.072 rmmod nvme_tcp 00:21:50.072 rmmod nvme_fabrics 00:21:50.072 rmmod nvme_keyring 00:21:50.072 16:13:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:50.072 16:13:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:21:50.072 16:13:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:21:50.073 16:13:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 2337031 ']' 00:21:50.073 16:13:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 2337031 00:21:50.073 16:13:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 2337031 ']' 00:21:50.073 16:13:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 2337031 00:21:50.073 16:13:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:21:50.073 16:13:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:50.073 16:13:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2337031 00:21:50.073 16:13:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:50.073 16:13:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:50.073 16:13:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2337031' 00:21:50.073 killing process with pid 2337031 00:21:50.073 16:13:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 2337031 00:21:50.073 [2024-07-15 16:13:24.154623] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:50.073 16:13:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 2337031 00:21:50.073 16:13:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:50.073 16:13:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:50.073 16:13:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:50.073 16:13:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:50.073 16:13:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:50.073 16:13:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:50.073 16:13:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:50.073 16:13:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:50.643 16:13:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:50.644 16:13:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:50.644 00:21:50.644 real 0m22.524s 00:21:50.644 user 0m23.152s 00:21:50.644 sys 0m10.106s 00:21:50.644 16:13:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:50.644 16:13:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:50.644 ************************************ 00:21:50.644 END TEST nvmf_fips 
00:21:50.644 ************************************ 00:21:50.644 16:13:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:50.644 16:13:26 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:21:50.644 16:13:26 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:21:50.644 16:13:26 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:21:50.644 16:13:26 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:21:50.644 16:13:26 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:21:50.644 16:13:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:58.814 16:13:33 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:58.814 16:13:33 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:21:58.814 16:13:33 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:58.814 16:13:33 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:58.814 16:13:33 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:58.814 16:13:33 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:58.814 16:13:33 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:58.814 16:13:33 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:21:58.814 16:13:33 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:58.814 16:13:33 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:21:58.814 16:13:33 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:21:58.814 16:13:33 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:21:58.814 16:13:33 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:21:58.814 16:13:33 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:21:58.814 16:13:33 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:21:58.814 16:13:33 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:58.814 16:13:33 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:58.814 16:13:33 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:58.814 16:13:33 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:58.814 16:13:33 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:58.814 16:13:33 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:58.814 16:13:33 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:58.814 16:13:33 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:58.814 16:13:33 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:58.814 16:13:33 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:58.814 16:13:33 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:58.814 16:13:33 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:58.814 16:13:33 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:58.814 16:13:33 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:58.814 16:13:33 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:58.814 16:13:33 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:58.814 16:13:33 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:58.814 16:13:33 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:58.815 16:13:33 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:58.815 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:58.815 16:13:33 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:58.815 16:13:33 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:58.815 16:13:33 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.815 16:13:33 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.815 16:13:33 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:58.815 16:13:33 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:58.815 16:13:33 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:58.815 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:58.815 16:13:33 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:58.815 16:13:33 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:58.815 16:13:33 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.815 16:13:33 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.815 16:13:33 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:58.815 16:13:33 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:58.815 16:13:33 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:58.815 16:13:33 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:58.815 16:13:33 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:58.815 16:13:33 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.815 16:13:33 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:58.815 16:13:33 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:58.815 16:13:33 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:58.815 16:13:33 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:58.815 16:13:33 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.815 16:13:33 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:58.815 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:58.815 16:13:33 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.815 16:13:33 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:58.815 16:13:33 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.815 16:13:33 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:58.815 16:13:33 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:58.815 16:13:33 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:58.815 16:13:33 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:58.815 16:13:33 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.815 16:13:33 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:58.815 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:58.815 16:13:33 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.815 16:13:33 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:58.815 16:13:33 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:58.815 16:13:33 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:21:58.815 16:13:33 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:58.815 16:13:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:58.815 16:13:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
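The device discovery traced above maps each supported PCI function (Intel E810, 0x8086:0x159b) to its kernel net device through sysfs before building TCP_INTERFACE_LIST. A minimal standalone sketch of that same lookup, for reference only — the PCI addresses are taken from the log output above, and nothing in this sketch is part of the test scripts themselves:

    #!/usr/bin/env bash
    # Sketch: resolve the net device name behind each E810 port seen in the log.
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        # the kernel exposes any bound netdev under the PCI device's sysfs node
        for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$netdir" ] || continue        # skip if no netdev is bound
            printf '%s -> %s\n' "$pci" "$(basename "$netdir")"
        done
    done

On the node above this would print the cvl_0_0 / cvl_0_1 names that the harness later assigns to the target and initiator sides.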
00:21:58.815 16:13:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:58.815 ************************************ 00:21:58.815 START TEST nvmf_perf_adq 00:21:58.815 ************************************ 00:21:58.815 16:13:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:58.815 * Looking for test storage... 00:21:58.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:58.815 16:13:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:58.815 16:13:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:58.815 16:13:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:58.815 16:13:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:58.815 16:13:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:58.815 16:13:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:58.815 16:13:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:58.815 16:13:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:58.815 16:13:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:58.815 16:13:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:58.815 16:13:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:58.815 16:13:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:58.815 16:13:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:58.815 16:13:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:58.815 16:13:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:58.815 16:13:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:58.815 16:13:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:58.815 16:13:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:58.815 16:13:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:58.815 16:13:33 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:58.815 16:13:33 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:58.815 16:13:33 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:58.815 16:13:33 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.815 16:13:33 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.815 16:13:33 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.815 16:13:33 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:58.815 16:13:33 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.815 16:13:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:21:58.815 16:13:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:58.815 16:13:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:58.815 16:13:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:58.816 16:13:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:58.816 16:13:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:58.816 16:13:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:58.816 16:13:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:58.816 16:13:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:58.816 16:13:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:58.816 16:13:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:58.816 16:13:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:04.144 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:04.144 Found 0000:4b:00.1 (0x8086 - 0x159b) 
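The "Found 0000:4b:00.x (0x8086 - 0x159b)" lines come from matching each PCI function's vendor/device ID against the known E810 IDs. A minimal sketch of that check against sysfs — the address and IDs are taken from the log, and the snippet is illustrative only, not part of the traced scripts:

    #!/usr/bin/env bash
    # Sketch: confirm a PCI function is an Intel E810 (vendor 0x8086, device 0x159b).
    pci=0000:4b:00.0                                  # address taken from the log
    vendor=$(cat "/sys/bus/pci/devices/$pci/vendor")  # e.g. 0x8086
    device=$(cat "/sys/bus/pci/devices/$pci/device")  # e.g. 0x159b
    if [ "$vendor" = 0x8086 ] && [ "$device" = 0x159b ]; then
        echo "Found $pci ($vendor - $device)"
    fi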
00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:04.144 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:04.144 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:22:04.144 16:13:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:05.529 16:13:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:07.441 16:13:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:12.780 16:13:48 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:12.780 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:12.780 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:12.780 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:12.781 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:12.781 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:12.781 16:13:48 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:12.781 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:12.781 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.495 ms 00:22:12.781 00:22:12.781 --- 10.0.0.2 ping statistics --- 00:22:12.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:12.781 rtt min/avg/max/mdev = 0.495/0.495/0.495/0.000 ms 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:12.781 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:12.781 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.385 ms 00:22:12.781 00:22:12.781 --- 10.0.0.1 ping statistics --- 00:22:12.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:12.781 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2348972 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2348972 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 2348972 ']' 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:12.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:12.781 16:13:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:12.781 [2024-07-15 16:13:48.589100] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
00:22:12.781 [2024-07-15 16:13:48.589174] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:13.042 EAL: No free 2048 kB hugepages reported on node 1 00:22:13.042 [2024-07-15 16:13:48.662082] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:13.042 [2024-07-15 16:13:48.739779] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:13.042 [2024-07-15 16:13:48.739820] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:13.042 [2024-07-15 16:13:48.739828] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:13.042 [2024-07-15 16:13:48.739834] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:13.042 [2024-07-15 16:13:48.739840] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:13.042 [2024-07-15 16:13:48.739978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:13.042 [2024-07-15 16:13:48.740097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:13.042 [2024-07-15 16:13:48.740256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:13.042 [2024-07-15 16:13:48.740350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:13.612 16:13:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:13.612 16:13:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:22:13.612 16:13:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:13.613 16:13:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:13.613 16:13:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:13.613 16:13:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:13.613 16:13:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:22:13.613 16:13:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:13.613 16:13:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:13.613 16:13:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.613 16:13:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:13.613 16:13:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.873 16:13:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:13.873 16:13:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:13.873 16:13:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.873 16:13:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:13.873 16:13:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.873 16:13:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:13.873 16:13:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.873 16:13:49 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:22:13.873 16:13:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.873 16:13:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:13.873 16:13:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.873 16:13:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:13.873 [2024-07-15 16:13:49.546145] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:13.873 16:13:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.873 16:13:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:13.873 16:13:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.873 16:13:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:13.873 Malloc1 00:22:13.873 16:13:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.873 16:13:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:13.873 16:13:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.873 16:13:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:13.873 16:13:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.873 16:13:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:13.873 16:13:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.873 16:13:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:13.873 16:13:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.873 16:13:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:13.873 16:13:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.873 16:13:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:13.873 [2024-07-15 16:13:49.605544] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:13.873 16:13:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.873 16:13:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=2349132 00:22:13.873 16:13:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:22:13.873 16:13:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:13.873 EAL: No free 2048 kB hugepages reported on node 1 00:22:15.784 16:13:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:22:15.784 16:13:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.784 16:13:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.045 16:13:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.045 16:13:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:22:16.045 
"tick_rate": 2400000000, 00:22:16.045 "poll_groups": [ 00:22:16.045 { 00:22:16.045 "name": "nvmf_tgt_poll_group_000", 00:22:16.045 "admin_qpairs": 1, 00:22:16.045 "io_qpairs": 1, 00:22:16.045 "current_admin_qpairs": 1, 00:22:16.045 "current_io_qpairs": 1, 00:22:16.045 "pending_bdev_io": 0, 00:22:16.045 "completed_nvme_io": 20206, 00:22:16.045 "transports": [ 00:22:16.045 { 00:22:16.045 "trtype": "TCP" 00:22:16.045 } 00:22:16.045 ] 00:22:16.045 }, 00:22:16.045 { 00:22:16.045 "name": "nvmf_tgt_poll_group_001", 00:22:16.045 "admin_qpairs": 0, 00:22:16.045 "io_qpairs": 1, 00:22:16.045 "current_admin_qpairs": 0, 00:22:16.045 "current_io_qpairs": 1, 00:22:16.045 "pending_bdev_io": 0, 00:22:16.045 "completed_nvme_io": 29884, 00:22:16.046 "transports": [ 00:22:16.046 { 00:22:16.046 "trtype": "TCP" 00:22:16.046 } 00:22:16.046 ] 00:22:16.046 }, 00:22:16.046 { 00:22:16.046 "name": "nvmf_tgt_poll_group_002", 00:22:16.046 "admin_qpairs": 0, 00:22:16.046 "io_qpairs": 1, 00:22:16.046 "current_admin_qpairs": 0, 00:22:16.046 "current_io_qpairs": 1, 00:22:16.046 "pending_bdev_io": 0, 00:22:16.046 "completed_nvme_io": 21010, 00:22:16.046 "transports": [ 00:22:16.046 { 00:22:16.046 "trtype": "TCP" 00:22:16.046 } 00:22:16.046 ] 00:22:16.046 }, 00:22:16.046 { 00:22:16.046 "name": "nvmf_tgt_poll_group_003", 00:22:16.046 "admin_qpairs": 0, 00:22:16.046 "io_qpairs": 1, 00:22:16.046 "current_admin_qpairs": 0, 00:22:16.046 "current_io_qpairs": 1, 00:22:16.046 "pending_bdev_io": 0, 00:22:16.046 "completed_nvme_io": 21079, 00:22:16.046 "transports": [ 00:22:16.046 { 00:22:16.046 "trtype": "TCP" 00:22:16.046 } 00:22:16.046 ] 00:22:16.046 } 00:22:16.046 ] 00:22:16.046 }' 00:22:16.046 16:13:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:16.046 16:13:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:22:16.046 16:13:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:22:16.046 16:13:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:22:16.046 16:13:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 2349132 00:22:24.186 Initializing NVMe Controllers 00:22:24.186 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:24.186 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:24.186 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:24.186 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:24.186 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:24.186 Initialization complete. Launching workers. 
00:22:24.186 ======================================================== 00:22:24.186 Latency(us) 00:22:24.186 Device Information : IOPS MiB/s Average min max 00:22:24.186 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11623.30 45.40 5506.30 1747.06 9611.07 00:22:24.186 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 15232.30 59.50 4201.50 1420.49 9514.49 00:22:24.186 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13758.60 53.74 4651.54 1290.95 10801.79 00:22:24.186 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13753.90 53.73 4652.81 1421.34 10840.89 00:22:24.186 ======================================================== 00:22:24.186 Total : 54368.09 212.38 4708.51 1290.95 10840.89 00:22:24.186 00:22:24.186 16:13:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:22:24.186 16:13:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:24.186 16:13:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:24.186 16:13:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:24.186 16:13:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:24.186 16:13:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:24.186 16:13:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:24.186 rmmod nvme_tcp 00:22:24.186 rmmod nvme_fabrics 00:22:24.186 rmmod nvme_keyring 00:22:24.186 16:13:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:24.186 16:13:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:24.186 16:13:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:24.186 16:13:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2348972 ']' 00:22:24.186 16:13:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2348972 00:22:24.186 16:13:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 2348972 ']' 00:22:24.186 16:13:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 2348972 00:22:24.186 16:13:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:22:24.186 16:13:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:24.186 16:13:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2348972 00:22:24.186 16:13:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:24.186 16:13:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:24.186 16:13:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2348972' 00:22:24.186 killing process with pid 2348972 00:22:24.186 16:13:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 2348972 00:22:24.186 16:13:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 2348972 00:22:24.186 16:14:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:24.186 16:14:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:24.186 16:14:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:24.186 16:14:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:24.186 16:14:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:24.186 16:14:00 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.186 16:14:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:24.186 16:14:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.732 16:14:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:26.732 16:14:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:22:26.732 16:14:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:28.161 16:14:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:30.065 16:14:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:35.349 16:14:10 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:35.349 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:35.349 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
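The device discovery being traced here walks a list of supported NIC PCI IDs and maps each PCI address to its kernel net device through sysfs, keeping only interfaces whose link is up. A condensed sketch of that lookup, using the two E810 addresses from this run as the example input; the real helper in nvmf/common.sh also handles RDMA transports and unbound devices, which this sketch omits:

    # map each candidate PCI address to its net interface via sysfs
    pci_devs=(0000:4b:00.0 0000:4b:00.1)                  # E810 (0x8086:0x159b) ports on this host
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # e.g. .../net/cvl_0_0
        [[ -e ${pci_net_devs[0]} ]] || continue           # no netdev bound to this PCI function
        for dev_path in "${pci_net_devs[@]}"; do
            dev=${dev_path##*/}
            [[ $(cat "$dev_path/operstate") == up ]] && net_devs+=("$dev")
        done
    done
    echo "Found net devices: ${net_devs[*]}"              # cvl_0_0 cvl_0_1 in this run

With two usable interfaces found, the first becomes the target interface and the second the initiator interface, matching the NVMF_TARGET_INTERFACE/NVMF_INITIATOR_INTERFACE assignments that follow in the trace.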
00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:35.349 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:35.349 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:35.349 
16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:35.349 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:35.349 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.573 ms 00:22:35.349 00:22:35.349 --- 10.0.0.2 ping statistics --- 00:22:35.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.349 rtt min/avg/max/mdev = 0.573/0.573/0.573/0.000 ms 00:22:35.349 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:35.349 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:35.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.339 ms 00:22:35.349 00:22:35.349 --- 10.0.0.1 ping statistics --- 00:22:35.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.349 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:22:35.350 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:35.350 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:35.350 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:35.350 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:35.350 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:35.350 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:35.350 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:35.350 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:35.350 16:14:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:35.350 16:14:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:22:35.350 16:14:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:35.350 16:14:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:35.350 16:14:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:35.350 net.core.busy_poll = 1 00:22:35.350 16:14:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:35.350 net.core.busy_read = 1 00:22:35.350 16:14:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:35.350 16:14:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:35.350 16:14:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:22:35.350 16:14:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:35.350 16:14:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:35.610 16:14:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:35.610 16:14:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:35.610 16:14:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:35.610 16:14:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:35.610 16:14:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2353819 00:22:35.610 16:14:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2353819 00:22:35.610 16:14:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:35.610 16:14:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 2353819 ']' 00:22:35.610 16:14:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:35.610 16:14:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:35.610 16:14:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:35.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:35.610 16:14:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:35.610 16:14:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:35.610 [2024-07-15 16:14:11.273742] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:22:35.610 [2024-07-15 16:14:11.273826] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:35.610 EAL: No free 2048 kB hugepages reported on node 1 00:22:35.610 [2024-07-15 16:14:11.349469] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:35.610 [2024-07-15 16:14:11.426943] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:35.610 [2024-07-15 16:14:11.426980] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:35.610 [2024-07-15 16:14:11.426988] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:35.610 [2024-07-15 16:14:11.426994] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:35.610 [2024-07-15 16:14:11.427000] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
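For the second pass the trace above switches the host into ADQ mode before restarting the target: hardware TC offload is enabled on the target port, busy polling is turned on, an mqprio qdisc with two traffic classes is installed, and a flower filter steers NVMe/TCP traffic for 10.0.0.2:4420 into TC 1 in hardware. Collected into one place, and assuming the same interface and namespace names as this run:

    NS=cvl_0_0_ns_spdk; IF=cvl_0_0                                      # names from this run
    ip netns exec "$NS" ethtool --offload "$IF" hw-tc-offload on        # let the E810 do TC in hardware
    ip netns exec "$NS" ethtool --set-priv-flags "$IF" channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1                                      # enable socket busy polling
    sysctl -w net.core.busy_read=1
    # two traffic classes: TC0 -> queues 0-1, TC1 -> queues 2-3, offloaded in channel mode
    ip netns exec "$NS" tc qdisc add dev "$IF" root mqprio num_tc 2 map 0 1 \
        queues 2@0 2@2 hw 1 mode channel
    ip netns exec "$NS" tc qdisc add dev "$IF" ingress
    # steer NVMe/TCP traffic for 10.0.0.2:4420 into TC 1, hardware only (skip_sw)
    ip netns exec "$NS" tc filter add dev "$IF" protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The trace then runs scripts/perf/nvmf/set_xps_rxqs to align transmit steering with the receive queues, and the target side mirrors the setup with sock_impl_set_options --enable-placement-id 1 and nvmf_create_transport -t tcp -o --sock-priority 1 (visible further down), so accepted queue pairs land on the cores that own the ADQ queue set.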
00:22:35.610 [2024-07-15 16:14:11.427184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:35.610 [2024-07-15 16:14:11.427438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:35.610 [2024-07-15 16:14:11.427601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.610 [2024-07-15 16:14:11.427600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:36.552 16:14:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:36.552 16:14:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:22:36.552 16:14:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:36.552 16:14:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:36.552 16:14:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:36.552 16:14:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:36.552 16:14:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:22:36.552 16:14:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:36.552 16:14:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:36.552 16:14:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.552 16:14:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:36.552 16:14:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.552 16:14:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:36.552 16:14:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:36.552 16:14:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.552 16:14:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:36.552 16:14:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.552 16:14:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:36.552 16:14:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.552 16:14:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:36.552 16:14:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.552 16:14:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:36.552 16:14:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.552 16:14:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:36.552 [2024-07-15 16:14:12.225429] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:36.552 16:14:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.552 16:14:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:36.552 16:14:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.552 16:14:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:36.552 Malloc1 00:22:36.552 16:14:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.552 16:14:12 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:36.552 16:14:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.552 16:14:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:36.552 16:14:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.552 16:14:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:36.552 16:14:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.552 16:14:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:36.552 16:14:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.552 16:14:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:36.552 16:14:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.552 16:14:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:36.552 [2024-07-15 16:14:12.284806] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:36.552 16:14:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.552 16:14:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=2353944 00:22:36.552 16:14:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:22:36.552 16:14:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:36.552 EAL: No free 2048 kB hugepages reported on node 1 00:22:38.463 16:14:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:22:38.463 16:14:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.463 16:14:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:38.722 16:14:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.722 16:14:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:22:38.722 "tick_rate": 2400000000, 00:22:38.722 "poll_groups": [ 00:22:38.722 { 00:22:38.722 "name": "nvmf_tgt_poll_group_000", 00:22:38.722 "admin_qpairs": 1, 00:22:38.722 "io_qpairs": 1, 00:22:38.722 "current_admin_qpairs": 1, 00:22:38.722 "current_io_qpairs": 1, 00:22:38.722 "pending_bdev_io": 0, 00:22:38.722 "completed_nvme_io": 27066, 00:22:38.722 "transports": [ 00:22:38.722 { 00:22:38.722 "trtype": "TCP" 00:22:38.722 } 00:22:38.722 ] 00:22:38.722 }, 00:22:38.722 { 00:22:38.722 "name": "nvmf_tgt_poll_group_001", 00:22:38.722 "admin_qpairs": 0, 00:22:38.722 "io_qpairs": 3, 00:22:38.722 "current_admin_qpairs": 0, 00:22:38.722 "current_io_qpairs": 3, 00:22:38.722 "pending_bdev_io": 0, 00:22:38.722 "completed_nvme_io": 43267, 00:22:38.722 "transports": [ 00:22:38.722 { 00:22:38.722 "trtype": "TCP" 00:22:38.722 } 00:22:38.722 ] 00:22:38.722 }, 00:22:38.722 { 00:22:38.722 "name": "nvmf_tgt_poll_group_002", 00:22:38.722 "admin_qpairs": 0, 00:22:38.722 "io_qpairs": 0, 00:22:38.722 "current_admin_qpairs": 0, 00:22:38.722 "current_io_qpairs": 0, 00:22:38.722 "pending_bdev_io": 0, 00:22:38.722 "completed_nvme_io": 0, 
00:22:38.722 "transports": [ 00:22:38.722 { 00:22:38.722 "trtype": "TCP" 00:22:38.722 } 00:22:38.722 ] 00:22:38.722 }, 00:22:38.722 { 00:22:38.722 "name": "nvmf_tgt_poll_group_003", 00:22:38.722 "admin_qpairs": 0, 00:22:38.722 "io_qpairs": 0, 00:22:38.722 "current_admin_qpairs": 0, 00:22:38.722 "current_io_qpairs": 0, 00:22:38.722 "pending_bdev_io": 0, 00:22:38.722 "completed_nvme_io": 0, 00:22:38.722 "transports": [ 00:22:38.722 { 00:22:38.722 "trtype": "TCP" 00:22:38.722 } 00:22:38.722 ] 00:22:38.722 } 00:22:38.722 ] 00:22:38.722 }' 00:22:38.722 16:14:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:38.722 16:14:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:22:38.722 16:14:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:22:38.722 16:14:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:22:38.722 16:14:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 2353944 00:22:46.859 Initializing NVMe Controllers 00:22:46.859 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:46.859 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:46.859 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:46.859 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:46.859 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:46.859 Initialization complete. Launching workers. 00:22:46.859 ======================================================== 00:22:46.859 Latency(us) 00:22:46.859 Device Information : IOPS MiB/s Average min max 00:22:46.859 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 8697.50 33.97 7360.18 1454.00 53568.73 00:22:46.859 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5921.30 23.13 10808.00 1607.23 56169.36 00:22:46.859 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 18013.70 70.37 3559.61 1201.25 43732.41 00:22:46.859 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7183.20 28.06 8911.53 1391.02 54025.57 00:22:46.859 ======================================================== 00:22:46.859 Total : 39815.70 155.53 6433.34 1201.25 56169.36 00:22:46.859 00:22:46.859 16:14:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:22:46.859 16:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:46.859 16:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:46.859 16:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:46.859 16:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:46.859 16:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:46.859 16:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:46.859 rmmod nvme_tcp 00:22:46.859 rmmod nvme_fabrics 00:22:46.859 rmmod nvme_keyring 00:22:46.859 16:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:46.859 16:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:46.859 16:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:46.859 16:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2353819 ']' 00:22:46.859 16:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 2353819 00:22:46.859 16:14:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 2353819 ']' 00:22:46.859 16:14:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 2353819 00:22:46.859 16:14:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:22:46.859 16:14:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:46.859 16:14:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2353819 00:22:46.859 16:14:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:46.859 16:14:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:46.859 16:14:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2353819' 00:22:46.859 killing process with pid 2353819 00:22:46.859 16:14:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 2353819 00:22:46.859 16:14:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 2353819 00:22:47.119 16:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:47.119 16:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:47.119 16:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:47.119 16:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:47.119 16:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:47.119 16:14:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.119 16:14:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:47.119 16:14:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.417 16:14:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:50.417 16:14:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:22:50.417 00:22:50.417 real 0m52.503s 00:22:50.417 user 2m47.589s 00:22:50.417 sys 0m10.934s 00:22:50.417 16:14:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:50.417 16:14:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:50.417 ************************************ 00:22:50.417 END TEST nvmf_perf_adq 00:22:50.417 ************************************ 00:22:50.417 16:14:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:50.417 16:14:25 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:50.417 16:14:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:50.417 16:14:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:50.417 16:14:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:50.417 ************************************ 00:22:50.417 START TEST nvmf_shutdown 00:22:50.417 ************************************ 00:22:50.417 16:14:25 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:50.417 * Looking for test storage... 
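Both perf runs above are judged by the same mechanism: the test dumps nvmf_get_stats over the RPC socket and counts poll groups by their current_io_qpairs. Without ADQ each of the four poll groups ends up with exactly one I/O queue pair; with ADQ and --sock-priority 1 the connections are concentrated so that some poll groups stay idle (two in this run, with lcore 6 taking the largest share of the I/O). A sketch of the ADQ-pass check, assuming the standard scripts/rpc.py client rather than the test's rpc_cmd wrapper and a slightly simpler jq expression than the one traced above:

    # count poll groups that currently own no I/O queue pairs (the ADQ pass expects at least 2)
    idle=$(./scripts/rpc.py nvmf_get_stats \
           | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | .name' \
           | wc -l)
    if (( idle < 2 )); then
        echo "ADQ steering did not concentrate qpairs as expected" >&2
        exit 1
    fi
    echo "$idle poll groups idle, qpairs concentrated by ADQ"

The non-ADQ pass inverts the check, requiring all four poll groups to report current_io_qpairs == 1, which is what the first stats dump in this section shows.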
00:22:50.417 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:50.417 16:14:25 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:50.417 16:14:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:22:50.417 16:14:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:50.417 16:14:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:50.417 16:14:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:50.417 16:14:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:50.417 16:14:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:50.417 16:14:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:50.417 16:14:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:50.417 16:14:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:50.417 16:14:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:50.417 16:14:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:50.417 16:14:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:50.417 16:14:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:50.417 16:14:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:50.417 16:14:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:50.417 16:14:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:50.417 16:14:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:50.417 16:14:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:50.417 16:14:26 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:50.417 16:14:26 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:50.417 16:14:26 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:50.417 16:14:26 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.417 16:14:26 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.417 16:14:26 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.417 16:14:26 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:50.418 16:14:26 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.418 16:14:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:22:50.418 16:14:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:50.418 16:14:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:50.418 16:14:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:50.418 16:14:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:50.418 16:14:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:50.418 16:14:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:50.418 16:14:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:50.418 16:14:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:50.418 16:14:26 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:50.418 16:14:26 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:50.418 16:14:26 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:50.418 16:14:26 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:50.418 16:14:26 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:50.418 16:14:26 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:50.418 ************************************ 00:22:50.418 START TEST nvmf_shutdown_tc1 00:22:50.418 ************************************ 00:22:50.418 16:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:22:50.418 16:14:26 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:22:50.418 16:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:50.418 16:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:50.418 16:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:50.418 16:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:50.418 16:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:50.418 16:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:50.418 16:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.418 16:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:50.418 16:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.418 16:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:50.418 16:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:50.418 16:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:50.418 16:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:57.078 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:57.078 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:57.078 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:57.078 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:57.078 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:57.078 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:57.078 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:57.078 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:22:57.078 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:57.078 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:22:57.078 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:22:57.078 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:22:57.078 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:22:57.078 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:22:57.078 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:57.078 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:57.078 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:57.078 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:57.078 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:57.078 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:57.078 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:57.078 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:57.078 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:57.078 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:57.078 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:57.078 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:57.078 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:57.078 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:57.078 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:57.078 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:57.078 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:57.078 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:57.078 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:57.078 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:57.078 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:57.078 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:57.078 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:57.078 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.078 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.078 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:57.078 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:57.078 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:57.078 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:57.078 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:57.079 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:57.079 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.079 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.079 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:57.079 16:14:32 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:57.079 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:57.079 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:57.079 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:57.079 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.079 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:57.079 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:57.079 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:57.079 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:57.079 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.079 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:57.079 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:57.079 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.079 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:57.079 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.079 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:57.079 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:57.079 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:57.079 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:57.079 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.079 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:57.079 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:57.079 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.079 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:57.079 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:57.079 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:57.079 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:57.079 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:57.079 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:57.079 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:57.079 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:57.079 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:57.079 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:57.079 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:57.079 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:57.079 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:57.079 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:57.079 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:57.079 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:57.340 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:57.340 16:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:57.340 16:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:57.340 16:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:57.340 16:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:57.340 16:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:57.340 16:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:57.340 16:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:57.609 16:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:57.610 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:57.610 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.499 ms 00:22:57.610 00:22:57.610 --- 10.0.0.2 ping statistics --- 00:22:57.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.610 rtt min/avg/max/mdev = 0.499/0.499/0.499/0.000 ms 00:22:57.610 16:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:57.610 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:57.610 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.348 ms 00:22:57.610 00:22:57.610 --- 10.0.0.1 ping statistics --- 00:22:57.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.610 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:22:57.610 16:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:57.610 16:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:22:57.610 16:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:57.610 16:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:57.610 16:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:57.610 16:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:57.610 16:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:57.610 16:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:57.610 16:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:57.610 16:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:57.610 16:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:57.610 16:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:57.610 16:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:57.610 16:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2360409 00:22:57.610 16:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2360409 00:22:57.610 16:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:57.610 16:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2360409 ']' 00:22:57.610 16:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:57.610 16:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:57.610 16:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:57.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:57.610 16:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:57.610 16:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:57.610 [2024-07-15 16:14:33.337270] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
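For reference, the nvmftestinit/nvmfappstart trace above reduces to the following target-side setup: one e810 port is moved into a private network namespace, addresses are assigned on both sides, connectivity is verified, and nvmf_tgt is launched inside that namespace. A minimal sketch using this run's values (interfaces cvl_0_0/cvl_0_1, addresses 10.0.0.1/10.0.0.2, namespace cvl_0_0_ns_spdk; the shell variable names and the shortened nvmf_tgt path are only for the sketch, and other hosts will report different e810 net devices):

TARGET_IF=cvl_0_0; INITIATOR_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
ip -4 addr flush "$TARGET_IF"; ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"                  # target port lives in its own namespace
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"           # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # root ns -> target ns, as logged above
ip netns exec "$NS" ping -c 1 10.0.0.1                # target ns -> root ns
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &   # target then listens on 10.0.0.2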
00:22:57.610 [2024-07-15 16:14:33.337335] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:57.610 EAL: No free 2048 kB hugepages reported on node 1 00:22:57.610 [2024-07-15 16:14:33.425073] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:57.874 [2024-07-15 16:14:33.520132] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:57.874 [2024-07-15 16:14:33.520186] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:57.874 [2024-07-15 16:14:33.520194] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:57.874 [2024-07-15 16:14:33.520201] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:57.874 [2024-07-15 16:14:33.520207] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:57.874 [2024-07-15 16:14:33.520341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:57.874 [2024-07-15 16:14:33.520509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:57.874 [2024-07-15 16:14:33.520666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:57.874 [2024-07-15 16:14:33.520666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:58.446 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:58.446 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:22:58.446 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:58.446 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:58.446 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:58.446 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:58.446 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:58.446 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.446 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:58.446 [2024-07-15 16:14:34.162611] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:58.446 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.446 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:58.446 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:58.446 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:58.446 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:58.446 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:58.446 16:14:34 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:58.446 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:58.446 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:58.446 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:58.446 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:58.446 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:58.446 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:58.446 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:58.446 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:58.446 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:58.446 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:58.446 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:58.446 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:58.446 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:58.446 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:58.446 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:58.446 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:58.446 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:58.446 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:58.446 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:58.446 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:58.447 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.447 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:58.447 Malloc1 00:22:58.447 [2024-07-15 16:14:34.266050] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:58.447 Malloc2 00:22:58.707 Malloc3 00:22:58.707 Malloc4 00:22:58.707 Malloc5 00:22:58.707 Malloc6 00:22:58.707 Malloc7 00:22:58.707 Malloc8 00:22:58.970 Malloc9 00:22:58.970 Malloc10 00:22:58.970 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.970 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:58.970 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:58.970 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:58.970 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2360790 00:22:58.970 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2360790 
/var/tmp/bdevperf.sock 00:22:58.970 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2360790 ']' 00:22:58.970 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:58.970 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:58.970 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:58.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:58.970 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:58.970 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:58.970 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:58.970 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:58.970 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:22:58.970 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:22:58.970 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:58.970 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:58.970 { 00:22:58.970 "params": { 00:22:58.970 "name": "Nvme$subsystem", 00:22:58.970 "trtype": "$TEST_TRANSPORT", 00:22:58.970 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.970 "adrfam": "ipv4", 00:22:58.970 "trsvcid": "$NVMF_PORT", 00:22:58.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.970 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.970 "hdgst": ${hdgst:-false}, 00:22:58.970 "ddgst": ${ddgst:-false} 00:22:58.970 }, 00:22:58.970 "method": "bdev_nvme_attach_controller" 00:22:58.970 } 00:22:58.970 EOF 00:22:58.970 )") 00:22:58.970 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:58.970 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:58.970 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:58.970 { 00:22:58.970 "params": { 00:22:58.970 "name": "Nvme$subsystem", 00:22:58.970 "trtype": "$TEST_TRANSPORT", 00:22:58.970 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.970 "adrfam": "ipv4", 00:22:58.970 "trsvcid": "$NVMF_PORT", 00:22:58.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.970 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.970 "hdgst": ${hdgst:-false}, 00:22:58.970 "ddgst": ${ddgst:-false} 00:22:58.970 }, 00:22:58.970 "method": "bdev_nvme_attach_controller" 00:22:58.970 } 00:22:58.970 EOF 00:22:58.970 )") 00:22:58.970 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:58.970 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:58.970 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:58.970 { 00:22:58.970 "params": { 00:22:58.970 
"name": "Nvme$subsystem", 00:22:58.970 "trtype": "$TEST_TRANSPORT", 00:22:58.970 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.970 "adrfam": "ipv4", 00:22:58.970 "trsvcid": "$NVMF_PORT", 00:22:58.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.970 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.970 "hdgst": ${hdgst:-false}, 00:22:58.970 "ddgst": ${ddgst:-false} 00:22:58.970 }, 00:22:58.970 "method": "bdev_nvme_attach_controller" 00:22:58.970 } 00:22:58.970 EOF 00:22:58.970 )") 00:22:58.970 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:58.970 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:58.970 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:58.970 { 00:22:58.970 "params": { 00:22:58.970 "name": "Nvme$subsystem", 00:22:58.970 "trtype": "$TEST_TRANSPORT", 00:22:58.970 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.970 "adrfam": "ipv4", 00:22:58.970 "trsvcid": "$NVMF_PORT", 00:22:58.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.970 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.970 "hdgst": ${hdgst:-false}, 00:22:58.970 "ddgst": ${ddgst:-false} 00:22:58.970 }, 00:22:58.970 "method": "bdev_nvme_attach_controller" 00:22:58.970 } 00:22:58.970 EOF 00:22:58.970 )") 00:22:58.970 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:58.970 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:58.970 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:58.970 { 00:22:58.970 "params": { 00:22:58.970 "name": "Nvme$subsystem", 00:22:58.970 "trtype": "$TEST_TRANSPORT", 00:22:58.970 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.970 "adrfam": "ipv4", 00:22:58.970 "trsvcid": "$NVMF_PORT", 00:22:58.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.970 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.970 "hdgst": ${hdgst:-false}, 00:22:58.970 "ddgst": ${ddgst:-false} 00:22:58.970 }, 00:22:58.970 "method": "bdev_nvme_attach_controller" 00:22:58.970 } 00:22:58.970 EOF 00:22:58.970 )") 00:22:58.970 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:58.970 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:58.970 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:58.970 { 00:22:58.970 "params": { 00:22:58.970 "name": "Nvme$subsystem", 00:22:58.970 "trtype": "$TEST_TRANSPORT", 00:22:58.970 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.970 "adrfam": "ipv4", 00:22:58.970 "trsvcid": "$NVMF_PORT", 00:22:58.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.970 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.970 "hdgst": ${hdgst:-false}, 00:22:58.970 "ddgst": ${ddgst:-false} 00:22:58.970 }, 00:22:58.970 "method": "bdev_nvme_attach_controller" 00:22:58.970 } 00:22:58.970 EOF 00:22:58.970 )") 00:22:58.970 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:58.970 [2024-07-15 16:14:34.713827] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
00:22:58.970 [2024-07-15 16:14:34.713880] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:58.970 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:58.970 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:58.970 { 00:22:58.970 "params": { 00:22:58.970 "name": "Nvme$subsystem", 00:22:58.970 "trtype": "$TEST_TRANSPORT", 00:22:58.970 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.970 "adrfam": "ipv4", 00:22:58.970 "trsvcid": "$NVMF_PORT", 00:22:58.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.970 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.970 "hdgst": ${hdgst:-false}, 00:22:58.970 "ddgst": ${ddgst:-false} 00:22:58.970 }, 00:22:58.970 "method": "bdev_nvme_attach_controller" 00:22:58.970 } 00:22:58.970 EOF 00:22:58.970 )") 00:22:58.970 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:58.970 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:58.970 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:58.970 { 00:22:58.970 "params": { 00:22:58.970 "name": "Nvme$subsystem", 00:22:58.970 "trtype": "$TEST_TRANSPORT", 00:22:58.970 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.970 "adrfam": "ipv4", 00:22:58.970 "trsvcid": "$NVMF_PORT", 00:22:58.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.970 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.970 "hdgst": ${hdgst:-false}, 00:22:58.970 "ddgst": ${ddgst:-false} 00:22:58.970 }, 00:22:58.970 "method": "bdev_nvme_attach_controller" 00:22:58.970 } 00:22:58.970 EOF 00:22:58.970 )") 00:22:58.970 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:58.970 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:58.970 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:58.970 { 00:22:58.970 "params": { 00:22:58.970 "name": "Nvme$subsystem", 00:22:58.970 "trtype": "$TEST_TRANSPORT", 00:22:58.970 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.970 "adrfam": "ipv4", 00:22:58.970 "trsvcid": "$NVMF_PORT", 00:22:58.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.970 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.970 "hdgst": ${hdgst:-false}, 00:22:58.970 "ddgst": ${ddgst:-false} 00:22:58.970 }, 00:22:58.970 "method": "bdev_nvme_attach_controller" 00:22:58.970 } 00:22:58.970 EOF 00:22:58.970 )") 00:22:58.970 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:58.971 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:58.971 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:58.971 { 00:22:58.971 "params": { 00:22:58.971 "name": "Nvme$subsystem", 00:22:58.971 "trtype": "$TEST_TRANSPORT", 00:22:58.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.971 "adrfam": "ipv4", 00:22:58.971 "trsvcid": "$NVMF_PORT", 00:22:58.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.971 "hdgst": ${hdgst:-false}, 
00:22:58.971 "ddgst": ${ddgst:-false} 00:22:58.971 }, 00:22:58.971 "method": "bdev_nvme_attach_controller" 00:22:58.971 } 00:22:58.971 EOF 00:22:58.971 )") 00:22:58.971 EAL: No free 2048 kB hugepages reported on node 1 00:22:58.971 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:58.971 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:22:58.971 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:22:58.971 16:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:58.971 "params": { 00:22:58.971 "name": "Nvme1", 00:22:58.971 "trtype": "tcp", 00:22:58.971 "traddr": "10.0.0.2", 00:22:58.971 "adrfam": "ipv4", 00:22:58.971 "trsvcid": "4420", 00:22:58.971 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:58.971 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:58.971 "hdgst": false, 00:22:58.971 "ddgst": false 00:22:58.971 }, 00:22:58.971 "method": "bdev_nvme_attach_controller" 00:22:58.971 },{ 00:22:58.971 "params": { 00:22:58.971 "name": "Nvme2", 00:22:58.971 "trtype": "tcp", 00:22:58.971 "traddr": "10.0.0.2", 00:22:58.971 "adrfam": "ipv4", 00:22:58.971 "trsvcid": "4420", 00:22:58.971 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:58.971 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:58.971 "hdgst": false, 00:22:58.971 "ddgst": false 00:22:58.971 }, 00:22:58.971 "method": "bdev_nvme_attach_controller" 00:22:58.971 },{ 00:22:58.971 "params": { 00:22:58.971 "name": "Nvme3", 00:22:58.971 "trtype": "tcp", 00:22:58.971 "traddr": "10.0.0.2", 00:22:58.971 "adrfam": "ipv4", 00:22:58.971 "trsvcid": "4420", 00:22:58.971 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:58.971 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:58.971 "hdgst": false, 00:22:58.971 "ddgst": false 00:22:58.971 }, 00:22:58.971 "method": "bdev_nvme_attach_controller" 00:22:58.971 },{ 00:22:58.971 "params": { 00:22:58.971 "name": "Nvme4", 00:22:58.971 "trtype": "tcp", 00:22:58.971 "traddr": "10.0.0.2", 00:22:58.971 "adrfam": "ipv4", 00:22:58.971 "trsvcid": "4420", 00:22:58.971 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:58.971 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:58.971 "hdgst": false, 00:22:58.971 "ddgst": false 00:22:58.971 }, 00:22:58.971 "method": "bdev_nvme_attach_controller" 00:22:58.971 },{ 00:22:58.971 "params": { 00:22:58.971 "name": "Nvme5", 00:22:58.971 "trtype": "tcp", 00:22:58.971 "traddr": "10.0.0.2", 00:22:58.971 "adrfam": "ipv4", 00:22:58.971 "trsvcid": "4420", 00:22:58.971 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:58.971 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:58.971 "hdgst": false, 00:22:58.971 "ddgst": false 00:22:58.971 }, 00:22:58.971 "method": "bdev_nvme_attach_controller" 00:22:58.971 },{ 00:22:58.971 "params": { 00:22:58.971 "name": "Nvme6", 00:22:58.971 "trtype": "tcp", 00:22:58.971 "traddr": "10.0.0.2", 00:22:58.971 "adrfam": "ipv4", 00:22:58.971 "trsvcid": "4420", 00:22:58.971 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:58.971 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:58.971 "hdgst": false, 00:22:58.971 "ddgst": false 00:22:58.971 }, 00:22:58.971 "method": "bdev_nvme_attach_controller" 00:22:58.971 },{ 00:22:58.971 "params": { 00:22:58.971 "name": "Nvme7", 00:22:58.971 "trtype": "tcp", 00:22:58.971 "traddr": "10.0.0.2", 00:22:58.971 "adrfam": "ipv4", 00:22:58.971 "trsvcid": "4420", 00:22:58.971 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:58.971 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:58.971 "hdgst": false, 00:22:58.971 "ddgst": false 
00:22:58.971 }, 00:22:58.971 "method": "bdev_nvme_attach_controller" 00:22:58.971 },{ 00:22:58.971 "params": { 00:22:58.971 "name": "Nvme8", 00:22:58.971 "trtype": "tcp", 00:22:58.971 "traddr": "10.0.0.2", 00:22:58.971 "adrfam": "ipv4", 00:22:58.971 "trsvcid": "4420", 00:22:58.971 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:58.971 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:58.971 "hdgst": false, 00:22:58.971 "ddgst": false 00:22:58.971 }, 00:22:58.971 "method": "bdev_nvme_attach_controller" 00:22:58.971 },{ 00:22:58.971 "params": { 00:22:58.971 "name": "Nvme9", 00:22:58.971 "trtype": "tcp", 00:22:58.971 "traddr": "10.0.0.2", 00:22:58.971 "adrfam": "ipv4", 00:22:58.971 "trsvcid": "4420", 00:22:58.971 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:58.971 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:58.971 "hdgst": false, 00:22:58.971 "ddgst": false 00:22:58.971 }, 00:22:58.971 "method": "bdev_nvme_attach_controller" 00:22:58.971 },{ 00:22:58.971 "params": { 00:22:58.971 "name": "Nvme10", 00:22:58.971 "trtype": "tcp", 00:22:58.971 "traddr": "10.0.0.2", 00:22:58.971 "adrfam": "ipv4", 00:22:58.971 "trsvcid": "4420", 00:22:58.971 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:58.971 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:58.971 "hdgst": false, 00:22:58.971 "ddgst": false 00:22:58.971 }, 00:22:58.971 "method": "bdev_nvme_attach_controller" 00:22:58.971 }' 00:22:58.971 [2024-07-15 16:14:34.774036] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.232 [2024-07-15 16:14:34.839455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:00.616 16:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:00.616 16:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:23:00.616 16:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:00.616 16:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.616 16:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:00.616 16:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.616 16:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2360790 00:23:00.616 16:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:23:00.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2360790 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:00.616 16:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:23:01.559 16:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2360409 00:23:01.559 16:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:01.559 16:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:01.559 16:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:01.559 16:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 
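Condensed from the trace above, the nvmf_shutdown_tc1 sequence being exercised is: attach all ten remote controllers from a bdev_svc app, hard-kill that app, then confirm the target survived the abrupt disconnects and can still serve I/O. A sketch of that flow (waitforlisten, rpc_cmd and gen_nvmf_target_json are helpers from the SPDK test framework seen in the trace; paths are shortened, and $nvmfpid is the nvmf_tgt started earlier, 2360409 in this run):

./test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) &
perfpid=$!
waitforlisten "$perfpid" /var/tmp/bdevperf.sock
rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init   # all ten controllers attached
kill -9 "$perfpid"                                      # abruptly drop every TCP connection
rm -f /var/run/spdk_bdev1
sleep 1
kill -0 "$nvmfpid"                                      # target must still be running...
./build/examples/bdevperf --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 1                       # ...and still serve verify I/O for 1 second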
00:23:01.559 16:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.559 16:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.559 { 00:23:01.559 "params": { 00:23:01.559 "name": "Nvme$subsystem", 00:23:01.559 "trtype": "$TEST_TRANSPORT", 00:23:01.559 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.559 "adrfam": "ipv4", 00:23:01.559 "trsvcid": "$NVMF_PORT", 00:23:01.559 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.559 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.559 "hdgst": ${hdgst:-false}, 00:23:01.559 "ddgst": ${ddgst:-false} 00:23:01.559 }, 00:23:01.559 "method": "bdev_nvme_attach_controller" 00:23:01.559 } 00:23:01.559 EOF 00:23:01.559 )") 00:23:01.559 16:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:01.559 16:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.559 16:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.559 { 00:23:01.559 "params": { 00:23:01.559 "name": "Nvme$subsystem", 00:23:01.559 "trtype": "$TEST_TRANSPORT", 00:23:01.559 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.559 "adrfam": "ipv4", 00:23:01.559 "trsvcid": "$NVMF_PORT", 00:23:01.560 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.560 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.560 "hdgst": ${hdgst:-false}, 00:23:01.560 "ddgst": ${ddgst:-false} 00:23:01.560 }, 00:23:01.560 "method": "bdev_nvme_attach_controller" 00:23:01.560 } 00:23:01.560 EOF 00:23:01.560 )") 00:23:01.560 16:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:01.560 16:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.560 16:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.560 { 00:23:01.560 "params": { 00:23:01.560 "name": "Nvme$subsystem", 00:23:01.560 "trtype": "$TEST_TRANSPORT", 00:23:01.560 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.560 "adrfam": "ipv4", 00:23:01.560 "trsvcid": "$NVMF_PORT", 00:23:01.560 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.560 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.560 "hdgst": ${hdgst:-false}, 00:23:01.560 "ddgst": ${ddgst:-false} 00:23:01.560 }, 00:23:01.560 "method": "bdev_nvme_attach_controller" 00:23:01.560 } 00:23:01.560 EOF 00:23:01.560 )") 00:23:01.560 16:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:01.560 16:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.560 16:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.560 { 00:23:01.560 "params": { 00:23:01.560 "name": "Nvme$subsystem", 00:23:01.560 "trtype": "$TEST_TRANSPORT", 00:23:01.560 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.560 "adrfam": "ipv4", 00:23:01.560 "trsvcid": "$NVMF_PORT", 00:23:01.560 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.560 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.560 "hdgst": ${hdgst:-false}, 00:23:01.560 "ddgst": ${ddgst:-false} 00:23:01.560 }, 00:23:01.560 "method": "bdev_nvme_attach_controller" 00:23:01.560 } 00:23:01.560 EOF 00:23:01.560 )") 00:23:01.560 16:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:01.560 16:14:37 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.560 16:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.560 { 00:23:01.560 "params": { 00:23:01.560 "name": "Nvme$subsystem", 00:23:01.560 "trtype": "$TEST_TRANSPORT", 00:23:01.560 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.560 "adrfam": "ipv4", 00:23:01.560 "trsvcid": "$NVMF_PORT", 00:23:01.560 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.560 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.560 "hdgst": ${hdgst:-false}, 00:23:01.560 "ddgst": ${ddgst:-false} 00:23:01.560 }, 00:23:01.560 "method": "bdev_nvme_attach_controller" 00:23:01.560 } 00:23:01.560 EOF 00:23:01.560 )") 00:23:01.560 16:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:01.560 16:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.560 16:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.560 { 00:23:01.560 "params": { 00:23:01.560 "name": "Nvme$subsystem", 00:23:01.560 "trtype": "$TEST_TRANSPORT", 00:23:01.560 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.560 "adrfam": "ipv4", 00:23:01.560 "trsvcid": "$NVMF_PORT", 00:23:01.560 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.560 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.560 "hdgst": ${hdgst:-false}, 00:23:01.560 "ddgst": ${ddgst:-false} 00:23:01.560 }, 00:23:01.560 "method": "bdev_nvme_attach_controller" 00:23:01.560 } 00:23:01.560 EOF 00:23:01.560 )") 00:23:01.560 16:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:01.560 16:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.560 16:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.560 { 00:23:01.560 "params": { 00:23:01.560 "name": "Nvme$subsystem", 00:23:01.560 "trtype": "$TEST_TRANSPORT", 00:23:01.560 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.560 "adrfam": "ipv4", 00:23:01.560 "trsvcid": "$NVMF_PORT", 00:23:01.560 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.560 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.560 "hdgst": ${hdgst:-false}, 00:23:01.560 "ddgst": ${ddgst:-false} 00:23:01.560 }, 00:23:01.560 "method": "bdev_nvme_attach_controller" 00:23:01.560 } 00:23:01.560 EOF 00:23:01.560 )") 00:23:01.560 [2024-07-15 16:14:37.286019] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
00:23:01.560 [2024-07-15 16:14:37.286074] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2361250 ] 00:23:01.560 16:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:01.560 16:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.560 16:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.560 { 00:23:01.560 "params": { 00:23:01.560 "name": "Nvme$subsystem", 00:23:01.560 "trtype": "$TEST_TRANSPORT", 00:23:01.560 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.560 "adrfam": "ipv4", 00:23:01.560 "trsvcid": "$NVMF_PORT", 00:23:01.560 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.560 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.560 "hdgst": ${hdgst:-false}, 00:23:01.560 "ddgst": ${ddgst:-false} 00:23:01.560 }, 00:23:01.560 "method": "bdev_nvme_attach_controller" 00:23:01.560 } 00:23:01.560 EOF 00:23:01.560 )") 00:23:01.560 16:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:01.560 16:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.560 16:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.560 { 00:23:01.560 "params": { 00:23:01.560 "name": "Nvme$subsystem", 00:23:01.560 "trtype": "$TEST_TRANSPORT", 00:23:01.560 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.560 "adrfam": "ipv4", 00:23:01.560 "trsvcid": "$NVMF_PORT", 00:23:01.560 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.560 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.560 "hdgst": ${hdgst:-false}, 00:23:01.560 "ddgst": ${ddgst:-false} 00:23:01.560 }, 00:23:01.560 "method": "bdev_nvme_attach_controller" 00:23:01.560 } 00:23:01.560 EOF 00:23:01.560 )") 00:23:01.560 16:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:01.560 16:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.560 16:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.560 { 00:23:01.560 "params": { 00:23:01.560 "name": "Nvme$subsystem", 00:23:01.560 "trtype": "$TEST_TRANSPORT", 00:23:01.560 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.560 "adrfam": "ipv4", 00:23:01.560 "trsvcid": "$NVMF_PORT", 00:23:01.560 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.560 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.560 "hdgst": ${hdgst:-false}, 00:23:01.560 "ddgst": ${ddgst:-false} 00:23:01.560 }, 00:23:01.560 "method": "bdev_nvme_attach_controller" 00:23:01.560 } 00:23:01.560 EOF 00:23:01.560 )") 00:23:01.560 16:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:01.560 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.560 16:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
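The jq / IFS=, / printf steps traced just below are the join stage of gen_nvmf_target_json: with IFS set to a comma, the ${config[*]} expansion concatenates the per-subsystem fragments, which is what produces the single '{...},{...},...' string printf shows. A two-line illustration of that bash behaviour (generic example values, not taken from nvmf/common.sh):

config=('{"a":1}' '{"b":2}' '{"c":3}')
( IFS=,; printf '%s\n' "${config[*]}" )   # prints {"a":1},{"b":2},{"c":3}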
00:23:01.560 16:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:01.560 16:14:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:01.560 "params": { 00:23:01.560 "name": "Nvme1", 00:23:01.560 "trtype": "tcp", 00:23:01.560 "traddr": "10.0.0.2", 00:23:01.560 "adrfam": "ipv4", 00:23:01.560 "trsvcid": "4420", 00:23:01.560 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:01.560 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:01.560 "hdgst": false, 00:23:01.560 "ddgst": false 00:23:01.560 }, 00:23:01.560 "method": "bdev_nvme_attach_controller" 00:23:01.560 },{ 00:23:01.560 "params": { 00:23:01.560 "name": "Nvme2", 00:23:01.560 "trtype": "tcp", 00:23:01.560 "traddr": "10.0.0.2", 00:23:01.560 "adrfam": "ipv4", 00:23:01.560 "trsvcid": "4420", 00:23:01.560 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:01.560 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:01.560 "hdgst": false, 00:23:01.560 "ddgst": false 00:23:01.560 }, 00:23:01.560 "method": "bdev_nvme_attach_controller" 00:23:01.560 },{ 00:23:01.560 "params": { 00:23:01.560 "name": "Nvme3", 00:23:01.560 "trtype": "tcp", 00:23:01.560 "traddr": "10.0.0.2", 00:23:01.560 "adrfam": "ipv4", 00:23:01.560 "trsvcid": "4420", 00:23:01.560 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:01.560 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:01.560 "hdgst": false, 00:23:01.560 "ddgst": false 00:23:01.560 }, 00:23:01.560 "method": "bdev_nvme_attach_controller" 00:23:01.560 },{ 00:23:01.561 "params": { 00:23:01.561 "name": "Nvme4", 00:23:01.561 "trtype": "tcp", 00:23:01.561 "traddr": "10.0.0.2", 00:23:01.561 "adrfam": "ipv4", 00:23:01.561 "trsvcid": "4420", 00:23:01.561 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:01.561 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:01.561 "hdgst": false, 00:23:01.561 "ddgst": false 00:23:01.561 }, 00:23:01.561 "method": "bdev_nvme_attach_controller" 00:23:01.561 },{ 00:23:01.561 "params": { 00:23:01.561 "name": "Nvme5", 00:23:01.561 "trtype": "tcp", 00:23:01.561 "traddr": "10.0.0.2", 00:23:01.561 "adrfam": "ipv4", 00:23:01.561 "trsvcid": "4420", 00:23:01.561 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:01.561 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:01.561 "hdgst": false, 00:23:01.561 "ddgst": false 00:23:01.561 }, 00:23:01.561 "method": "bdev_nvme_attach_controller" 00:23:01.561 },{ 00:23:01.561 "params": { 00:23:01.561 "name": "Nvme6", 00:23:01.561 "trtype": "tcp", 00:23:01.561 "traddr": "10.0.0.2", 00:23:01.561 "adrfam": "ipv4", 00:23:01.561 "trsvcid": "4420", 00:23:01.561 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:01.561 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:01.561 "hdgst": false, 00:23:01.561 "ddgst": false 00:23:01.561 }, 00:23:01.561 "method": "bdev_nvme_attach_controller" 00:23:01.561 },{ 00:23:01.561 "params": { 00:23:01.561 "name": "Nvme7", 00:23:01.561 "trtype": "tcp", 00:23:01.561 "traddr": "10.0.0.2", 00:23:01.561 "adrfam": "ipv4", 00:23:01.561 "trsvcid": "4420", 00:23:01.561 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:01.561 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:01.561 "hdgst": false, 00:23:01.561 "ddgst": false 00:23:01.561 }, 00:23:01.561 "method": "bdev_nvme_attach_controller" 00:23:01.561 },{ 00:23:01.561 "params": { 00:23:01.561 "name": "Nvme8", 00:23:01.561 "trtype": "tcp", 00:23:01.561 "traddr": "10.0.0.2", 00:23:01.561 "adrfam": "ipv4", 00:23:01.561 "trsvcid": "4420", 00:23:01.561 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:01.561 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:01.561 "hdgst": false, 
00:23:01.561 "ddgst": false 00:23:01.561 }, 00:23:01.561 "method": "bdev_nvme_attach_controller" 00:23:01.561 },{ 00:23:01.561 "params": { 00:23:01.561 "name": "Nvme9", 00:23:01.561 "trtype": "tcp", 00:23:01.561 "traddr": "10.0.0.2", 00:23:01.561 "adrfam": "ipv4", 00:23:01.561 "trsvcid": "4420", 00:23:01.561 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:01.561 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:01.561 "hdgst": false, 00:23:01.561 "ddgst": false 00:23:01.561 }, 00:23:01.561 "method": "bdev_nvme_attach_controller" 00:23:01.561 },{ 00:23:01.561 "params": { 00:23:01.561 "name": "Nvme10", 00:23:01.561 "trtype": "tcp", 00:23:01.561 "traddr": "10.0.0.2", 00:23:01.561 "adrfam": "ipv4", 00:23:01.561 "trsvcid": "4420", 00:23:01.561 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:01.561 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:01.561 "hdgst": false, 00:23:01.561 "ddgst": false 00:23:01.561 }, 00:23:01.561 "method": "bdev_nvme_attach_controller" 00:23:01.561 }' 00:23:01.561 [2024-07-15 16:14:37.347089] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.822 [2024-07-15 16:14:37.411347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:03.202 Running I/O for 1 seconds... 00:23:04.586 00:23:04.586 Latency(us) 00:23:04.586 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:04.586 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:04.586 Verification LBA range: start 0x0 length 0x400 00:23:04.586 Nvme1n1 : 1.11 229.63 14.35 0.00 0.00 270812.80 23046.83 256901.12 00:23:04.586 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:04.586 Verification LBA range: start 0x0 length 0x400 00:23:04.586 Nvme2n1 : 1.12 228.28 14.27 0.00 0.00 272633.17 25449.81 251658.24 00:23:04.586 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:04.586 Verification LBA range: start 0x0 length 0x400 00:23:04.586 Nvme3n1 : 1.06 242.48 15.15 0.00 0.00 251391.04 11141.12 265639.25 00:23:04.586 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:04.586 Verification LBA range: start 0x0 length 0x400 00:23:04.586 Nvme4n1 : 1.09 234.10 14.63 0.00 0.00 255681.71 22937.60 249910.61 00:23:04.586 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:04.586 Verification LBA range: start 0x0 length 0x400 00:23:04.586 Nvme5n1 : 1.21 215.61 13.48 0.00 0.00 265578.10 6908.59 269134.51 00:23:04.586 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:04.586 Verification LBA range: start 0x0 length 0x400 00:23:04.586 Nvme6n1 : 1.16 220.97 13.81 0.00 0.00 262474.67 19551.57 262144.00 00:23:04.586 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:04.586 Verification LBA range: start 0x0 length 0x400 00:23:04.586 Nvme7n1 : 1.16 220.45 13.78 0.00 0.00 258343.04 22391.47 262144.00 00:23:04.586 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:04.586 Verification LBA range: start 0x0 length 0x400 00:23:04.586 Nvme8n1 : 1.21 264.15 16.51 0.00 0.00 212640.26 15182.51 253405.87 00:23:04.586 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:04.586 Verification LBA range: start 0x0 length 0x400 00:23:04.586 Nvme9n1 : 1.18 275.54 17.22 0.00 0.00 198847.48 3986.77 227191.47 00:23:04.586 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:04.586 Verification LBA range: start 0x0 length 0x400 
00:23:04.586 Nvme10n1 : 1.23 259.54 16.22 0.00 0.00 208753.82 7591.25 290106.03 00:23:04.586 =================================================================================================================== 00:23:04.586 Total : 2390.77 149.42 0.00 0.00 242986.21 3986.77 290106.03 00:23:04.586 16:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:23:04.586 16:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:04.586 16:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:04.586 16:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:04.586 16:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:04.586 16:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:04.586 16:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:23:04.586 16:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:04.586 16:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:23:04.586 16:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:04.586 16:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:04.586 rmmod nvme_tcp 00:23:04.586 rmmod nvme_fabrics 00:23:04.586 rmmod nvme_keyring 00:23:04.586 16:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:04.586 16:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:23:04.586 16:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:23:04.586 16:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2360409 ']' 00:23:04.586 16:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2360409 00:23:04.586 16:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 2360409 ']' 00:23:04.586 16:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 2360409 00:23:04.586 16:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:23:04.586 16:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:04.586 16:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2360409 00:23:04.586 16:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:04.586 16:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:04.586 16:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2360409' 00:23:04.586 killing process with pid 2360409 00:23:04.586 16:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 2360409 00:23:04.586 16:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 2360409 00:23:04.847 16:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:04.847 16:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:04.847 16:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:04.847 16:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:04.847 16:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:04.847 16:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.847 16:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:04.847 16:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.392 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:07.392 00:23:07.392 real 0m16.669s 00:23:07.392 user 0m34.842s 00:23:07.392 sys 0m6.457s 00:23:07.392 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:07.392 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:07.392 ************************************ 00:23:07.392 END TEST nvmf_shutdown_tc1 00:23:07.392 ************************************ 00:23:07.392 16:14:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:07.392 16:14:42 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:07.392 16:14:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:07.392 16:14:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:07.392 16:14:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:07.392 ************************************ 00:23:07.392 START TEST nvmf_shutdown_tc2 00:23:07.392 ************************************ 00:23:07.392 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:23:07.392 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:23:07.392 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:07.392 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:07.392 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:07.392 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:07.392 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:07.392 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:07.392 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.392 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:07.392 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.392 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:07.392 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:07.392 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:07.392 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:07.392 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:07.392 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:07.392 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:07.392 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:07.392 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:07.392 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:07.392 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:07.392 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:23:07.392 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:07.392 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:23:07.392 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:23:07.392 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:23:07.392 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:23:07.392 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:23:07.392 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma 
]] 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:07.393 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:07.393 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:07.393 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:07.393 16:14:42 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:07.393 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:07.393 16:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:07.393 16:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:07.393 16:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:07.393 16:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:07.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:07.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.491 ms 00:23:07.393 00:23:07.393 --- 10.0.0.2 ping statistics --- 00:23:07.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.393 rtt min/avg/max/mdev = 0.491/0.491/0.491/0.000 ms 00:23:07.393 16:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:07.393 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:07.393 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.354 ms 00:23:07.393 00:23:07.393 --- 10.0.0.1 ping statistics --- 00:23:07.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.393 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:23:07.393 16:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:07.393 16:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:23:07.393 16:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:07.393 16:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:07.393 16:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:07.393 16:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:07.393 16:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:07.393 16:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:07.393 16:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:07.393 16:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:07.393 16:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:07.393 16:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:07.393 16:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:07.393 16:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2362594 00:23:07.393 16:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2362594 00:23:07.393 16:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:07.393 16:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@829 -- # '[' -z 2362594 ']' 00:23:07.393 16:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.393 16:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:07.393 16:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:07.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:07.393 16:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:07.393 16:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:07.393 [2024-07-15 16:14:43.229074] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:23:07.393 [2024-07-15 16:14:43.229127] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:07.654 EAL: No free 2048 kB hugepages reported on node 1 00:23:07.654 [2024-07-15 16:14:43.312318] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:07.654 [2024-07-15 16:14:43.366571] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:07.654 [2024-07-15 16:14:43.366604] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:07.654 [2024-07-15 16:14:43.366610] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:07.654 [2024-07-15 16:14:43.366614] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:07.654 [2024-07-15 16:14:43.366618] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
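The namespace plumbing traced just above (nvmf_tcp_init in nvmf/common.sh) can be reproduced by hand. The sketch below is a minimal standalone approximation, not the script itself: it assumes the same ice port names cvl_0_0/cvl_0_1 reported earlier in this log and must run as root; on a host without these E810 ports a veth pair would have to stand in for them.
  # Minimal sketch of the TCP loopback fixture set up above (assumes cvl_0_0/cvl_0_1 exist; run as root)
  ip netns add cvl_0_0_ns_spdk                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator keeps 10.0.0.1 in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listener port
  ping -c 1 10.0.0.2                                  # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> root namespace
After this fixture is up, the target is started inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt) exactly as the trace above shows, so the initiator side of the test talks to 10.0.0.2:4420 over a real NIC port rather than loopback.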
00:23:07.654 [2024-07-15 16:14:43.366727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:07.654 [2024-07-15 16:14:43.366884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:07.654 [2024-07-15 16:14:43.367036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.654 [2024-07-15 16:14:43.367039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:08.224 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:08.224 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:23:08.224 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:08.224 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:08.224 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:08.484 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:08.484 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:08.484 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.484 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:08.484 [2024-07-15 16:14:44.086446] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:08.484 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.484 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:08.485 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:08.485 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:08.485 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:08.485 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:08.485 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:08.485 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:08.485 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:08.485 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:08.485 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:08.485 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:08.485 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:08.485 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:08.485 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:08.485 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:08.485 16:14:44 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:08.485 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:08.485 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:08.485 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:08.485 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:08.485 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:08.485 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:08.485 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:08.485 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:08.485 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:08.485 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:08.485 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.485 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:08.485 Malloc1 00:23:08.485 [2024-07-15 16:14:44.185239] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:08.485 Malloc2 00:23:08.485 Malloc3 00:23:08.485 Malloc4 00:23:08.485 Malloc5 00:23:08.745 Malloc6 00:23:08.745 Malloc7 00:23:08.745 Malloc8 00:23:08.745 Malloc9 00:23:08.745 Malloc10 00:23:08.745 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.745 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:08.745 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:08.745 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:08.745 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2362973 00:23:08.745 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 2362973 /var/tmp/bdevperf.sock 00:23:08.745 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2362973 ']' 00:23:08.745 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:08.745 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:08.745 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:08.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:08.745 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:08.745 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:08.745 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:08.745 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:08.745 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:23:08.745 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:23:08.745 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:08.745 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:08.745 { 00:23:08.745 "params": { 00:23:08.745 "name": "Nvme$subsystem", 00:23:08.745 "trtype": "$TEST_TRANSPORT", 00:23:08.745 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:08.745 "adrfam": "ipv4", 00:23:08.745 "trsvcid": "$NVMF_PORT", 00:23:08.745 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:08.745 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:08.745 "hdgst": ${hdgst:-false}, 00:23:08.745 "ddgst": ${ddgst:-false} 00:23:08.745 }, 00:23:08.745 "method": "bdev_nvme_attach_controller" 00:23:08.745 } 00:23:08.745 EOF 00:23:08.745 )") 00:23:09.005 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:09.005 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.005 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.005 { 00:23:09.005 "params": { 00:23:09.005 "name": "Nvme$subsystem", 00:23:09.005 "trtype": "$TEST_TRANSPORT", 00:23:09.005 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.005 "adrfam": "ipv4", 00:23:09.005 "trsvcid": "$NVMF_PORT", 00:23:09.005 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.005 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.005 "hdgst": ${hdgst:-false}, 00:23:09.005 "ddgst": ${ddgst:-false} 00:23:09.005 }, 00:23:09.005 "method": "bdev_nvme_attach_controller" 00:23:09.005 } 00:23:09.005 EOF 00:23:09.005 )") 00:23:09.005 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:09.005 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.005 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.005 { 00:23:09.005 "params": { 00:23:09.005 "name": "Nvme$subsystem", 00:23:09.005 "trtype": "$TEST_TRANSPORT", 00:23:09.005 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.005 "adrfam": "ipv4", 00:23:09.005 "trsvcid": "$NVMF_PORT", 00:23:09.005 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.005 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.005 "hdgst": ${hdgst:-false}, 00:23:09.005 "ddgst": ${ddgst:-false} 00:23:09.005 }, 00:23:09.005 "method": "bdev_nvme_attach_controller" 00:23:09.005 } 00:23:09.005 EOF 00:23:09.005 )") 00:23:09.005 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:09.005 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:23:09.005 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.006 { 00:23:09.006 "params": { 00:23:09.006 "name": "Nvme$subsystem", 00:23:09.006 "trtype": "$TEST_TRANSPORT", 00:23:09.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.006 "adrfam": "ipv4", 00:23:09.006 "trsvcid": "$NVMF_PORT", 00:23:09.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.006 "hdgst": ${hdgst:-false}, 00:23:09.006 "ddgst": ${ddgst:-false} 00:23:09.006 }, 00:23:09.006 "method": "bdev_nvme_attach_controller" 00:23:09.006 } 00:23:09.006 EOF 00:23:09.006 )") 00:23:09.006 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:09.006 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.006 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.006 { 00:23:09.006 "params": { 00:23:09.006 "name": "Nvme$subsystem", 00:23:09.006 "trtype": "$TEST_TRANSPORT", 00:23:09.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.006 "adrfam": "ipv4", 00:23:09.006 "trsvcid": "$NVMF_PORT", 00:23:09.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.006 "hdgst": ${hdgst:-false}, 00:23:09.006 "ddgst": ${ddgst:-false} 00:23:09.006 }, 00:23:09.006 "method": "bdev_nvme_attach_controller" 00:23:09.006 } 00:23:09.006 EOF 00:23:09.006 )") 00:23:09.006 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:09.006 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.006 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.006 { 00:23:09.006 "params": { 00:23:09.006 "name": "Nvme$subsystem", 00:23:09.006 "trtype": "$TEST_TRANSPORT", 00:23:09.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.006 "adrfam": "ipv4", 00:23:09.006 "trsvcid": "$NVMF_PORT", 00:23:09.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.006 "hdgst": ${hdgst:-false}, 00:23:09.006 "ddgst": ${ddgst:-false} 00:23:09.006 }, 00:23:09.006 "method": "bdev_nvme_attach_controller" 00:23:09.006 } 00:23:09.006 EOF 00:23:09.006 )") 00:23:09.006 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:09.006 [2024-07-15 16:14:44.628922] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
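The gen_nvmf_target_json fragments accumulated in this stretch of the trace are plain heredoc strings, one per controller, that are later joined, validated with jq, and handed to bdevperf on /dev/fd/63 via process substitution. The following is a standalone sketch of that pattern, not the helper itself: the outer subsystems/bdev wrapper is an assumption (it is not visible in this excerpt), and only two controllers are generated for brevity.
  config=()
  for i in 1 2; do                                   # the real run covers subsystems 1..10
    config+=("$(cat <<EOF
  {
    "method": "bdev_nvme_attach_controller",
    "params": {
      "name": "Nvme$i", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode$i", "hostnqn": "nqn.2016-06.io.spdk:host$i",
      "hdgst": false, "ddgst": false
    }
  }
EOF
  )")
  done
  joined=$(IFS=,; printf '%s' "${config[*]}")        # comma-join the per-controller fragments
  printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}' "$joined" | jq .
  # bdevperf reads the same stream through process substitution, which is where the
  # --json /dev/fd/63 argument seen above comes from, e.g.:
  #   bdevperf -r /var/tmp/bdevperf.sock --json <(generate_config) -q 64 -o 65536 -w verify -t 10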
00:23:09.006 [2024-07-15 16:14:44.628976] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2362973 ] 00:23:09.006 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.006 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.006 { 00:23:09.006 "params": { 00:23:09.006 "name": "Nvme$subsystem", 00:23:09.006 "trtype": "$TEST_TRANSPORT", 00:23:09.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.006 "adrfam": "ipv4", 00:23:09.006 "trsvcid": "$NVMF_PORT", 00:23:09.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.006 "hdgst": ${hdgst:-false}, 00:23:09.006 "ddgst": ${ddgst:-false} 00:23:09.006 }, 00:23:09.006 "method": "bdev_nvme_attach_controller" 00:23:09.006 } 00:23:09.006 EOF 00:23:09.006 )") 00:23:09.006 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:09.006 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.006 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.006 { 00:23:09.006 "params": { 00:23:09.006 "name": "Nvme$subsystem", 00:23:09.006 "trtype": "$TEST_TRANSPORT", 00:23:09.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.006 "adrfam": "ipv4", 00:23:09.006 "trsvcid": "$NVMF_PORT", 00:23:09.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.006 "hdgst": ${hdgst:-false}, 00:23:09.006 "ddgst": ${ddgst:-false} 00:23:09.006 }, 00:23:09.006 "method": "bdev_nvme_attach_controller" 00:23:09.006 } 00:23:09.006 EOF 00:23:09.006 )") 00:23:09.006 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:09.006 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.006 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.006 { 00:23:09.006 "params": { 00:23:09.006 "name": "Nvme$subsystem", 00:23:09.006 "trtype": "$TEST_TRANSPORT", 00:23:09.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.006 "adrfam": "ipv4", 00:23:09.006 "trsvcid": "$NVMF_PORT", 00:23:09.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.006 "hdgst": ${hdgst:-false}, 00:23:09.006 "ddgst": ${ddgst:-false} 00:23:09.006 }, 00:23:09.006 "method": "bdev_nvme_attach_controller" 00:23:09.006 } 00:23:09.006 EOF 00:23:09.006 )") 00:23:09.006 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:09.006 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.006 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.006 { 00:23:09.006 "params": { 00:23:09.006 "name": "Nvme$subsystem", 00:23:09.006 "trtype": "$TEST_TRANSPORT", 00:23:09.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.006 "adrfam": "ipv4", 00:23:09.006 "trsvcid": "$NVMF_PORT", 00:23:09.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.006 
"hdgst": ${hdgst:-false}, 00:23:09.006 "ddgst": ${ddgst:-false} 00:23:09.006 }, 00:23:09.006 "method": "bdev_nvme_attach_controller" 00:23:09.006 } 00:23:09.006 EOF 00:23:09.006 )") 00:23:09.006 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.006 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:09.006 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:23:09.006 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:23:09.006 16:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:09.006 "params": { 00:23:09.006 "name": "Nvme1", 00:23:09.006 "trtype": "tcp", 00:23:09.006 "traddr": "10.0.0.2", 00:23:09.006 "adrfam": "ipv4", 00:23:09.006 "trsvcid": "4420", 00:23:09.006 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.006 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:09.006 "hdgst": false, 00:23:09.006 "ddgst": false 00:23:09.006 }, 00:23:09.006 "method": "bdev_nvme_attach_controller" 00:23:09.006 },{ 00:23:09.006 "params": { 00:23:09.006 "name": "Nvme2", 00:23:09.006 "trtype": "tcp", 00:23:09.006 "traddr": "10.0.0.2", 00:23:09.006 "adrfam": "ipv4", 00:23:09.006 "trsvcid": "4420", 00:23:09.006 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:09.006 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:09.006 "hdgst": false, 00:23:09.006 "ddgst": false 00:23:09.006 }, 00:23:09.006 "method": "bdev_nvme_attach_controller" 00:23:09.006 },{ 00:23:09.006 "params": { 00:23:09.006 "name": "Nvme3", 00:23:09.006 "trtype": "tcp", 00:23:09.006 "traddr": "10.0.0.2", 00:23:09.006 "adrfam": "ipv4", 00:23:09.006 "trsvcid": "4420", 00:23:09.006 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:09.006 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:09.006 "hdgst": false, 00:23:09.006 "ddgst": false 00:23:09.006 }, 00:23:09.006 "method": "bdev_nvme_attach_controller" 00:23:09.006 },{ 00:23:09.006 "params": { 00:23:09.006 "name": "Nvme4", 00:23:09.006 "trtype": "tcp", 00:23:09.006 "traddr": "10.0.0.2", 00:23:09.006 "adrfam": "ipv4", 00:23:09.006 "trsvcid": "4420", 00:23:09.006 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:09.006 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:09.006 "hdgst": false, 00:23:09.006 "ddgst": false 00:23:09.006 }, 00:23:09.006 "method": "bdev_nvme_attach_controller" 00:23:09.006 },{ 00:23:09.006 "params": { 00:23:09.006 "name": "Nvme5", 00:23:09.006 "trtype": "tcp", 00:23:09.006 "traddr": "10.0.0.2", 00:23:09.006 "adrfam": "ipv4", 00:23:09.006 "trsvcid": "4420", 00:23:09.006 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:09.006 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:09.006 "hdgst": false, 00:23:09.006 "ddgst": false 00:23:09.006 }, 00:23:09.006 "method": "bdev_nvme_attach_controller" 00:23:09.006 },{ 00:23:09.006 "params": { 00:23:09.006 "name": "Nvme6", 00:23:09.006 "trtype": "tcp", 00:23:09.006 "traddr": "10.0.0.2", 00:23:09.006 "adrfam": "ipv4", 00:23:09.006 "trsvcid": "4420", 00:23:09.006 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:09.006 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:09.006 "hdgst": false, 00:23:09.006 "ddgst": false 00:23:09.006 }, 00:23:09.006 "method": "bdev_nvme_attach_controller" 00:23:09.006 },{ 00:23:09.006 "params": { 00:23:09.006 "name": "Nvme7", 00:23:09.006 "trtype": "tcp", 00:23:09.006 "traddr": "10.0.0.2", 00:23:09.006 "adrfam": "ipv4", 00:23:09.006 "trsvcid": "4420", 00:23:09.006 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:09.006 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:09.006 "hdgst": false, 
00:23:09.006 "ddgst": false 00:23:09.006 }, 00:23:09.006 "method": "bdev_nvme_attach_controller" 00:23:09.006 },{ 00:23:09.006 "params": { 00:23:09.006 "name": "Nvme8", 00:23:09.006 "trtype": "tcp", 00:23:09.006 "traddr": "10.0.0.2", 00:23:09.006 "adrfam": "ipv4", 00:23:09.006 "trsvcid": "4420", 00:23:09.006 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:09.006 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:09.006 "hdgst": false, 00:23:09.006 "ddgst": false 00:23:09.006 }, 00:23:09.006 "method": "bdev_nvme_attach_controller" 00:23:09.006 },{ 00:23:09.006 "params": { 00:23:09.006 "name": "Nvme9", 00:23:09.006 "trtype": "tcp", 00:23:09.006 "traddr": "10.0.0.2", 00:23:09.006 "adrfam": "ipv4", 00:23:09.006 "trsvcid": "4420", 00:23:09.006 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:09.006 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:09.006 "hdgst": false, 00:23:09.006 "ddgst": false 00:23:09.006 }, 00:23:09.006 "method": "bdev_nvme_attach_controller" 00:23:09.006 },{ 00:23:09.006 "params": { 00:23:09.006 "name": "Nvme10", 00:23:09.006 "trtype": "tcp", 00:23:09.006 "traddr": "10.0.0.2", 00:23:09.006 "adrfam": "ipv4", 00:23:09.006 "trsvcid": "4420", 00:23:09.006 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:09.006 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:09.006 "hdgst": false, 00:23:09.006 "ddgst": false 00:23:09.006 }, 00:23:09.006 "method": "bdev_nvme_attach_controller" 00:23:09.006 }' 00:23:09.006 [2024-07-15 16:14:44.688898] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.006 [2024-07-15 16:14:44.754099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:10.505 Running I/O for 10 seconds... 00:23:10.505 16:14:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:10.505 16:14:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:23:10.505 16:14:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:10.505 16:14:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.505 16:14:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:10.505 16:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.505 16:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:10.505 16:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:10.505 16:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:10.505 16:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:23:10.505 16:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:23:10.505 16:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:10.505 16:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:10.505 16:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:10.505 16:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:10.505 16:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:10.505 16:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:10.505 16:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.505 16:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:10.505 16:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:10.505 16:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:10.766 16:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:10.766 16:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:10.766 16:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:10.766 16:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:10.766 16:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.766 16:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:10.766 16:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.766 16:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:23:10.766 16:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:10.766 16:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:11.026 16:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:11.026 16:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:11.026 16:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:11.026 16:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:11.026 16:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.026 16:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:11.026 16:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.026 16:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:11.026 16:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:11.026 16:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:23:11.026 16:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:23:11.026 16:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:23:11.026 16:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2362973 00:23:11.026 16:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2362973 ']' 00:23:11.026 16:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2362973 00:23:11.026 16:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:23:11.026 16:14:46 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:11.026 16:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2362973 00:23:11.026 16:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:11.026 16:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:11.026 16:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2362973' 00:23:11.026 killing process with pid 2362973 00:23:11.026 16:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2362973 00:23:11.026 16:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2362973 00:23:11.285 Received shutdown signal, test time was about 0.948718 seconds 00:23:11.285 00:23:11.285 Latency(us) 00:23:11.285 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:11.285 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:11.285 Verification LBA range: start 0x0 length 0x400 00:23:11.285 Nvme1n1 : 0.92 208.68 13.04 0.00 0.00 302991.64 23265.28 246415.36 00:23:11.285 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:11.285 Verification LBA range: start 0x0 length 0x400 00:23:11.285 Nvme2n1 : 0.93 274.88 17.18 0.00 0.00 225171.20 15837.87 246415.36 00:23:11.285 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:11.285 Verification LBA range: start 0x0 length 0x400 00:23:11.285 Nvme3n1 : 0.91 215.80 13.49 0.00 0.00 279263.17 2170.88 248162.99 00:23:11.285 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:11.285 Verification LBA range: start 0x0 length 0x400 00:23:11.285 Nvme4n1 : 0.94 272.25 17.02 0.00 0.00 218110.72 20643.84 241172.48 00:23:11.285 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:11.285 Verification LBA range: start 0x0 length 0x400 00:23:11.285 Nvme5n1 : 0.93 284.61 17.79 0.00 0.00 203326.99 2129.92 225443.84 00:23:11.285 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:11.285 Verification LBA range: start 0x0 length 0x400 00:23:11.285 Nvme6n1 : 0.95 270.81 16.93 0.00 0.00 209852.80 22063.79 241172.48 00:23:11.285 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:11.285 Verification LBA range: start 0x0 length 0x400 00:23:11.286 Nvme7n1 : 0.95 270.09 16.88 0.00 0.00 205711.36 18568.53 246415.36 00:23:11.286 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:11.286 Verification LBA range: start 0x0 length 0x400 00:23:11.286 Nvme8n1 : 0.92 209.45 13.09 0.00 0.00 257811.63 22391.47 221948.59 00:23:11.286 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:11.286 Verification LBA range: start 0x0 length 0x400 00:23:11.286 Nvme9n1 : 0.94 204.84 12.80 0.00 0.00 258304.85 23374.51 291853.65 00:23:11.286 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:11.286 Verification LBA range: start 0x0 length 0x400 00:23:11.286 Nvme10n1 : 0.92 208.01 13.00 0.00 0.00 247107.13 22828.37 246415.36 00:23:11.286 =================================================================================================================== 00:23:11.286 Total : 2419.42 151.21 0.00 0.00 236693.78 
2129.92 291853.65 00:23:11.286 16:14:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:23:12.226 16:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2362594 00:23:12.226 16:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:23:12.226 16:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:12.226 16:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:12.226 16:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:12.226 16:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:12.226 16:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:12.226 16:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:23:12.227 16:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:12.227 16:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:23:12.227 16:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:12.227 16:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:12.227 rmmod nvme_tcp 00:23:12.487 rmmod nvme_fabrics 00:23:12.487 rmmod nvme_keyring 00:23:12.487 16:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:12.487 16:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:23:12.487 16:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:23:12.487 16:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2362594 ']' 00:23:12.487 16:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2362594 00:23:12.487 16:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2362594 ']' 00:23:12.487 16:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2362594 00:23:12.487 16:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:23:12.487 16:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:12.487 16:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2362594 00:23:12.487 16:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:12.487 16:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:12.487 16:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2362594' 00:23:12.487 killing process with pid 2362594 00:23:12.487 16:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2362594 00:23:12.487 16:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2362594 00:23:12.749 16:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 
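The tc2 run above gates the kill on bdevperf actually making progress: the waitforio loop polls bdev_get_iostat over the bdevperf RPC socket until Nvme1n1 has completed at least 100 reads (3, then 67, then 131 in this run) before the process is torn down and the latency summary is printed. A minimal standalone sketch of that gate, assuming scripts/rpc.py from the SPDK tree in place of the rpc_cmd wrapper used by shutdown.sh:
  sock=/var/tmp/bdevperf.sock
  for _ in $(seq 1 10); do                           # same 10 attempts / 0.25 s cadence as the trace
    reads=$(./scripts/rpc.py -s "$sock" bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops')
    if [ "${reads:-0}" -ge 100 ]; then
      echo "Nvme1n1 has serviced $reads reads; safe to start shutting down"
      break
    fi
    sleep 0.25
  done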
00:23:12.749 16:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:12.749 16:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:12.749 16:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:12.750 16:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:12.750 16:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:12.750 16:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:12.750 16:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.664 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:14.664 00:23:14.664 real 0m7.676s 00:23:14.664 user 0m22.665s 00:23:14.664 sys 0m1.239s 00:23:14.664 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:14.664 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:14.664 ************************************ 00:23:14.664 END TEST nvmf_shutdown_tc2 00:23:14.664 ************************************ 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:14.926 ************************************ 00:23:14.926 START TEST nvmf_shutdown_tc3 00:23:14.926 ************************************ 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 
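The device discovery now restarting for tc3 is the same sysfs walk already seen for tc2: for each matching PCI function, nvmf/common.sh globs its net/ directory and strips the path to recover the interface name. A compact sketch of that lookup (the BDF below is the one reported in this log; substitute whatever is present on the host):
  pci=0000:4b:00.0                                   # first E810 port found above
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
  pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
  echo "Found net devices under $pci: ${pci_net_devs[*]}"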
00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:14.926 16:14:50 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:14.926 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:14.926 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:14.926 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:14.927 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:14.927 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:14.927 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:14.927 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:14.927 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:14.927 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:14.927 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:14.927 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:14.927 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:14.927 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:14.927 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:14.927 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:14.927 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:14.927 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:14.927 16:14:50 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:14.927 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:14.927 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:14.927 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:14.927 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:14.927 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:14.927 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:14.927 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:14.927 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:14.927 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:14.927 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:14.927 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:14.927 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:14.927 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:14.927 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:14.927 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:14.927 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:14.927 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:14.927 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:14.927 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:14.927 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:14.927 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:14.927 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:14.927 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:14.927 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:14.927 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:14.927 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:14.927 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:14.927 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:14.927 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:14.927 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:14.927 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:14.927 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:15.189 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:15.189 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:15.189 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:15.189 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:15.189 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.484 ms 00:23:15.189 00:23:15.189 --- 10.0.0.2 ping statistics --- 00:23:15.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.189 rtt min/avg/max/mdev = 0.484/0.484/0.484/0.000 ms 00:23:15.189 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:15.189 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:15.189 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.389 ms 00:23:15.189 00:23:15.189 --- 10.0.0.1 ping statistics --- 00:23:15.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.189 rtt min/avg/max/mdev = 0.389/0.389/0.389/0.000 ms 00:23:15.189 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:15.189 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:23:15.189 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:15.189 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:15.189 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:15.189 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:15.189 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:15.189 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:15.189 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:15.189 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:15.189 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:15.189 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:15.189 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:15.189 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2364150 00:23:15.189 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2364150 00:23:15.189 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2364150 ']' 00:23:15.189 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:15.189 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.189 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:15.189 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.190 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:15.190 16:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:15.190 [2024-07-15 16:14:50.999840] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:23:15.190 [2024-07-15 16:14:50.999905] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:15.451 EAL: No free 2048 kB hugepages reported on node 1 00:23:15.451 [2024-07-15 16:14:51.084974] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:15.451 [2024-07-15 16:14:51.147145] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:15.451 [2024-07-15 16:14:51.147180] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:15.451 [2024-07-15 16:14:51.147185] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:15.451 [2024-07-15 16:14:51.147190] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:15.451 [2024-07-15 16:14:51.147193] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:15.451 [2024-07-15 16:14:51.147335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:15.451 [2024-07-15 16:14:51.147494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:15.451 [2024-07-15 16:14:51.147648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:15.451 [2024-07-15 16:14:51.147651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:16.022 16:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:16.022 16:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:23:16.022 16:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:16.022 16:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:16.022 16:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:16.022 16:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:16.022 16:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:16.022 16:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.022 16:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:16.022 [2024-07-15 16:14:51.820837] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:16.022 16:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.022 16:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:16.022 16:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:16.022 16:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:16.022 16:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:16.022 16:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:16.022 16:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:16.022 16:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:16.022 16:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:16.022 16:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:16.022 16:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:16.022 16:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:16.022 16:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:16.022 16:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:16.022 16:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:16.022 16:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:16.022 16:14:51 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:16.022 16:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:16.282 16:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:16.282 16:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:16.282 16:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:16.282 16:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:16.282 16:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:16.282 16:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:16.282 16:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:16.282 16:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:16.282 16:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:16.282 16:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.282 16:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:16.282 Malloc1 00:23:16.282 [2024-07-15 16:14:51.919578] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:16.282 Malloc2 00:23:16.282 Malloc3 00:23:16.282 Malloc4 00:23:16.282 Malloc5 00:23:16.282 Malloc6 00:23:16.543 Malloc7 00:23:16.543 Malloc8 00:23:16.543 Malloc9 00:23:16.543 Malloc10 00:23:16.543 16:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.543 16:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:16.543 16:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:16.543 16:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:16.543 16:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2364498 00:23:16.543 16:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2364498 /var/tmp/bdevperf.sock 00:23:16.543 16:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2364498 ']' 00:23:16.543 16:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:16.543 16:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:16.543 16:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:16.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:16.543 16:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:16.543 16:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:16.543 16:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:16.543 16:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:16.543 16:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:23:16.543 16:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:23:16.543 16:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:16.543 16:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:16.543 { 00:23:16.543 "params": { 00:23:16.543 "name": "Nvme$subsystem", 00:23:16.543 "trtype": "$TEST_TRANSPORT", 00:23:16.543 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:16.543 "adrfam": "ipv4", 00:23:16.543 "trsvcid": "$NVMF_PORT", 00:23:16.543 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:16.543 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:16.543 "hdgst": ${hdgst:-false}, 00:23:16.543 "ddgst": ${ddgst:-false} 00:23:16.543 }, 00:23:16.543 "method": "bdev_nvme_attach_controller" 00:23:16.543 } 00:23:16.543 EOF 00:23:16.543 )") 00:23:16.543 16:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:16.543 16:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:16.543 16:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:16.543 { 00:23:16.543 "params": { 00:23:16.543 "name": "Nvme$subsystem", 00:23:16.543 "trtype": "$TEST_TRANSPORT", 00:23:16.543 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:16.543 "adrfam": "ipv4", 00:23:16.543 "trsvcid": "$NVMF_PORT", 00:23:16.543 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:16.543 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:16.543 "hdgst": ${hdgst:-false}, 00:23:16.543 "ddgst": ${ddgst:-false} 00:23:16.543 }, 00:23:16.543 "method": "bdev_nvme_attach_controller" 00:23:16.543 } 00:23:16.543 EOF 00:23:16.543 )") 00:23:16.543 16:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:16.543 16:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:16.543 16:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:16.543 { 00:23:16.543 "params": { 00:23:16.543 "name": "Nvme$subsystem", 00:23:16.543 "trtype": "$TEST_TRANSPORT", 00:23:16.543 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:16.543 "adrfam": "ipv4", 00:23:16.543 "trsvcid": "$NVMF_PORT", 00:23:16.543 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:16.543 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:16.543 "hdgst": ${hdgst:-false}, 00:23:16.543 "ddgst": ${ddgst:-false} 00:23:16.543 }, 00:23:16.543 "method": "bdev_nvme_attach_controller" 00:23:16.543 } 00:23:16.543 EOF 00:23:16.543 )") 00:23:16.543 16:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:16.543 16:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:23:16.543 16:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:16.543 { 00:23:16.543 "params": { 00:23:16.543 "name": "Nvme$subsystem", 00:23:16.543 "trtype": "$TEST_TRANSPORT", 00:23:16.543 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:16.543 "adrfam": "ipv4", 00:23:16.543 "trsvcid": "$NVMF_PORT", 00:23:16.543 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:16.543 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:16.543 "hdgst": ${hdgst:-false}, 00:23:16.543 "ddgst": ${ddgst:-false} 00:23:16.543 }, 00:23:16.543 "method": "bdev_nvme_attach_controller" 00:23:16.543 } 00:23:16.543 EOF 00:23:16.543 )") 00:23:16.543 16:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:16.543 16:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:16.543 16:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:16.543 { 00:23:16.543 "params": { 00:23:16.543 "name": "Nvme$subsystem", 00:23:16.543 "trtype": "$TEST_TRANSPORT", 00:23:16.543 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:16.543 "adrfam": "ipv4", 00:23:16.543 "trsvcid": "$NVMF_PORT", 00:23:16.543 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:16.543 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:16.543 "hdgst": ${hdgst:-false}, 00:23:16.543 "ddgst": ${ddgst:-false} 00:23:16.543 }, 00:23:16.543 "method": "bdev_nvme_attach_controller" 00:23:16.543 } 00:23:16.543 EOF 00:23:16.543 )") 00:23:16.543 16:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:16.543 16:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:16.543 16:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:16.543 { 00:23:16.543 "params": { 00:23:16.543 "name": "Nvme$subsystem", 00:23:16.543 "trtype": "$TEST_TRANSPORT", 00:23:16.543 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:16.543 "adrfam": "ipv4", 00:23:16.543 "trsvcid": "$NVMF_PORT", 00:23:16.543 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:16.543 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:16.543 "hdgst": ${hdgst:-false}, 00:23:16.543 "ddgst": ${ddgst:-false} 00:23:16.543 }, 00:23:16.543 "method": "bdev_nvme_attach_controller" 00:23:16.543 } 00:23:16.543 EOF 00:23:16.543 )") 00:23:16.544 16:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:16.544 [2024-07-15 16:14:52.357212] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
00:23:16.544 [2024-07-15 16:14:52.357265] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2364498 ] 00:23:16.544 16:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:16.544 16:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:16.544 { 00:23:16.544 "params": { 00:23:16.544 "name": "Nvme$subsystem", 00:23:16.544 "trtype": "$TEST_TRANSPORT", 00:23:16.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:16.544 "adrfam": "ipv4", 00:23:16.544 "trsvcid": "$NVMF_PORT", 00:23:16.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:16.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:16.544 "hdgst": ${hdgst:-false}, 00:23:16.544 "ddgst": ${ddgst:-false} 00:23:16.544 }, 00:23:16.544 "method": "bdev_nvme_attach_controller" 00:23:16.544 } 00:23:16.544 EOF 00:23:16.544 )") 00:23:16.544 16:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:16.544 16:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:16.544 16:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:16.544 { 00:23:16.544 "params": { 00:23:16.544 "name": "Nvme$subsystem", 00:23:16.544 "trtype": "$TEST_TRANSPORT", 00:23:16.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:16.544 "adrfam": "ipv4", 00:23:16.544 "trsvcid": "$NVMF_PORT", 00:23:16.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:16.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:16.544 "hdgst": ${hdgst:-false}, 00:23:16.544 "ddgst": ${ddgst:-false} 00:23:16.544 }, 00:23:16.544 "method": "bdev_nvme_attach_controller" 00:23:16.544 } 00:23:16.544 EOF 00:23:16.544 )") 00:23:16.544 16:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:16.544 16:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:16.544 16:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:16.544 { 00:23:16.544 "params": { 00:23:16.544 "name": "Nvme$subsystem", 00:23:16.544 "trtype": "$TEST_TRANSPORT", 00:23:16.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:16.544 "adrfam": "ipv4", 00:23:16.544 "trsvcid": "$NVMF_PORT", 00:23:16.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:16.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:16.544 "hdgst": ${hdgst:-false}, 00:23:16.544 "ddgst": ${ddgst:-false} 00:23:16.544 }, 00:23:16.544 "method": "bdev_nvme_attach_controller" 00:23:16.544 } 00:23:16.544 EOF 00:23:16.544 )") 00:23:16.544 16:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:16.544 EAL: No free 2048 kB hugepages reported on node 1 00:23:16.544 16:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:16.544 16:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:16.544 { 00:23:16.544 "params": { 00:23:16.544 "name": "Nvme$subsystem", 00:23:16.544 "trtype": "$TEST_TRANSPORT", 00:23:16.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:16.544 "adrfam": "ipv4", 00:23:16.544 "trsvcid": "$NVMF_PORT", 00:23:16.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:16.544 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:16.544 "hdgst": ${hdgst:-false}, 00:23:16.544 "ddgst": ${ddgst:-false} 00:23:16.544 }, 00:23:16.544 "method": "bdev_nvme_attach_controller" 00:23:16.544 } 00:23:16.544 EOF 00:23:16.544 )") 00:23:16.805 16:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:16.805 16:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:23:16.805 16:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:23:16.805 16:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:16.805 "params": { 00:23:16.805 "name": "Nvme1", 00:23:16.805 "trtype": "tcp", 00:23:16.805 "traddr": "10.0.0.2", 00:23:16.805 "adrfam": "ipv4", 00:23:16.805 "trsvcid": "4420", 00:23:16.805 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:16.805 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:16.805 "hdgst": false, 00:23:16.805 "ddgst": false 00:23:16.805 }, 00:23:16.805 "method": "bdev_nvme_attach_controller" 00:23:16.805 },{ 00:23:16.805 "params": { 00:23:16.805 "name": "Nvme2", 00:23:16.805 "trtype": "tcp", 00:23:16.805 "traddr": "10.0.0.2", 00:23:16.805 "adrfam": "ipv4", 00:23:16.805 "trsvcid": "4420", 00:23:16.805 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:16.805 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:16.805 "hdgst": false, 00:23:16.805 "ddgst": false 00:23:16.805 }, 00:23:16.805 "method": "bdev_nvme_attach_controller" 00:23:16.805 },{ 00:23:16.805 "params": { 00:23:16.805 "name": "Nvme3", 00:23:16.805 "trtype": "tcp", 00:23:16.805 "traddr": "10.0.0.2", 00:23:16.805 "adrfam": "ipv4", 00:23:16.805 "trsvcid": "4420", 00:23:16.805 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:16.805 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:16.805 "hdgst": false, 00:23:16.805 "ddgst": false 00:23:16.805 }, 00:23:16.805 "method": "bdev_nvme_attach_controller" 00:23:16.805 },{ 00:23:16.805 "params": { 00:23:16.805 "name": "Nvme4", 00:23:16.805 "trtype": "tcp", 00:23:16.805 "traddr": "10.0.0.2", 00:23:16.805 "adrfam": "ipv4", 00:23:16.805 "trsvcid": "4420", 00:23:16.805 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:16.805 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:16.805 "hdgst": false, 00:23:16.805 "ddgst": false 00:23:16.805 }, 00:23:16.805 "method": "bdev_nvme_attach_controller" 00:23:16.805 },{ 00:23:16.805 "params": { 00:23:16.805 "name": "Nvme5", 00:23:16.805 "trtype": "tcp", 00:23:16.805 "traddr": "10.0.0.2", 00:23:16.805 "adrfam": "ipv4", 00:23:16.805 "trsvcid": "4420", 00:23:16.805 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:16.805 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:16.805 "hdgst": false, 00:23:16.805 "ddgst": false 00:23:16.805 }, 00:23:16.805 "method": "bdev_nvme_attach_controller" 00:23:16.805 },{ 00:23:16.805 "params": { 00:23:16.805 "name": "Nvme6", 00:23:16.805 "trtype": "tcp", 00:23:16.805 "traddr": "10.0.0.2", 00:23:16.805 "adrfam": "ipv4", 00:23:16.805 "trsvcid": "4420", 00:23:16.805 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:16.805 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:16.805 "hdgst": false, 00:23:16.805 "ddgst": false 00:23:16.805 }, 00:23:16.805 "method": "bdev_nvme_attach_controller" 00:23:16.805 },{ 00:23:16.805 "params": { 00:23:16.805 "name": "Nvme7", 00:23:16.805 "trtype": "tcp", 00:23:16.805 "traddr": "10.0.0.2", 00:23:16.805 "adrfam": "ipv4", 00:23:16.805 "trsvcid": "4420", 00:23:16.805 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:16.805 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:16.805 "hdgst": false, 
00:23:16.805 "ddgst": false 00:23:16.805 }, 00:23:16.805 "method": "bdev_nvme_attach_controller" 00:23:16.805 },{ 00:23:16.805 "params": { 00:23:16.805 "name": "Nvme8", 00:23:16.805 "trtype": "tcp", 00:23:16.805 "traddr": "10.0.0.2", 00:23:16.805 "adrfam": "ipv4", 00:23:16.805 "trsvcid": "4420", 00:23:16.805 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:16.805 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:16.805 "hdgst": false, 00:23:16.805 "ddgst": false 00:23:16.805 }, 00:23:16.805 "method": "bdev_nvme_attach_controller" 00:23:16.805 },{ 00:23:16.805 "params": { 00:23:16.805 "name": "Nvme9", 00:23:16.805 "trtype": "tcp", 00:23:16.805 "traddr": "10.0.0.2", 00:23:16.805 "adrfam": "ipv4", 00:23:16.805 "trsvcid": "4420", 00:23:16.805 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:16.805 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:16.805 "hdgst": false, 00:23:16.805 "ddgst": false 00:23:16.805 }, 00:23:16.805 "method": "bdev_nvme_attach_controller" 00:23:16.805 },{ 00:23:16.805 "params": { 00:23:16.805 "name": "Nvme10", 00:23:16.805 "trtype": "tcp", 00:23:16.805 "traddr": "10.0.0.2", 00:23:16.805 "adrfam": "ipv4", 00:23:16.805 "trsvcid": "4420", 00:23:16.805 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:16.805 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:16.805 "hdgst": false, 00:23:16.805 "ddgst": false 00:23:16.805 }, 00:23:16.805 "method": "bdev_nvme_attach_controller" 00:23:16.805 }' 00:23:16.805 [2024-07-15 16:14:52.416996] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.805 [2024-07-15 16:14:52.481925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:18.189 Running I/O for 10 seconds... 00:23:18.189 16:14:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:18.189 16:14:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:23:18.189 16:14:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:18.189 16:14:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.189 16:14:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:18.450 16:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.450 16:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:18.450 16:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:18.450 16:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:18.450 16:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:18.450 16:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:23:18.450 16:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:23:18.450 16:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:18.450 16:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:18.450 16:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:18.450 16:14:54 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:18.450 16:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.450 16:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:18.450 16:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.450 16:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:18.450 16:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:18.450 16:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:18.710 16:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:18.711 16:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:18.711 16:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:18.711 16:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:18.711 16:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.711 16:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:18.711 16:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.711 16:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:23:18.711 16:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:18.711 16:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:18.977 16:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:18.977 16:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:18.977 16:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:18.977 16:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:18.977 16:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.977 16:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:18.977 16:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.977 16:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:18.977 16:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:18.977 16:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:23:18.977 16:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:23:18.977 16:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:23:18.977 16:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2364150 00:23:18.977 16:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 2364150 ']' 00:23:18.977 16:14:54 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 2364150 00:23:18.977 16:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:23:18.977 16:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:18.977 16:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2364150 00:23:18.977 16:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:18.977 16:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:18.977 16:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2364150' 00:23:18.977 killing process with pid 2364150 00:23:18.977 16:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 2364150 00:23:18.977 16:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 2364150 00:23:18.977 [2024-07-15 16:14:54.792087] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.977 [2024-07-15 16:14:54.792135] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.977 [2024-07-15 16:14:54.792141] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.977 [2024-07-15 16:14:54.792146] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.977 [2024-07-15 16:14:54.792150] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.977 [2024-07-15 16:14:54.792155] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.977 [2024-07-15 16:14:54.792160] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.977 [2024-07-15 16:14:54.792165] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.977 [2024-07-15 16:14:54.792169] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.977 [2024-07-15 16:14:54.792174] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.977 [2024-07-15 16:14:54.792178] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.977 [2024-07-15 16:14:54.792182] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.977 [2024-07-15 16:14:54.792187] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.977 [2024-07-15 16:14:54.792192] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.977 [2024-07-15 16:14:54.792197] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the 
state(5) to be set 00:23:18.977 [2024-07-15 16:14:54.792202] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.977 [2024-07-15 16:14:54.792206] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.977 [2024-07-15 16:14:54.792211] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.977 [2024-07-15 16:14:54.792215] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.977 [2024-07-15 16:14:54.792220] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.977 [2024-07-15 16:14:54.792225] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.977 [2024-07-15 16:14:54.792229] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.977 [2024-07-15 16:14:54.792234] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.977 [2024-07-15 16:14:54.792239] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.977 [2024-07-15 16:14:54.792243] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.977 [2024-07-15 16:14:54.792253] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.977 [2024-07-15 16:14:54.792257] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.977 [2024-07-15 16:14:54.792262] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.977 [2024-07-15 16:14:54.792266] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.977 [2024-07-15 16:14:54.792271] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.977 [2024-07-15 16:14:54.792276] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.977 [2024-07-15 16:14:54.792280] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.977 [2024-07-15 16:14:54.792285] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.977 [2024-07-15 16:14:54.792289] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.977 [2024-07-15 16:14:54.792293] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.977 [2024-07-15 16:14:54.792298] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.977 [2024-07-15 16:14:54.792302] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.977 [2024-07-15 16:14:54.792307] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.977 [2024-07-15 16:14:54.792311] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.792316] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.792320] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.792325] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.792330] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.792335] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.792339] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.792344] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.792348] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.792353] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.792357] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.792361] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.792366] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.792370] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.792375] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.792380] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.792384] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.792388] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.792393] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.792398] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.792402] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.792407] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.792411] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.792416] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.792420] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71da0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795717] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795741] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795746] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795751] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795756] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795761] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795766] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795770] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795775] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795779] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795784] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795788] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795792] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795797] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795802] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795807] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795811] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 
00:23:18.978 [2024-07-15 16:14:54.795827] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795832] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795836] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795841] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795845] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795850] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795855] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795860] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795864] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795869] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795874] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795878] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795883] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795887] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795892] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795896] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795901] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795906] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795911] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795915] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795920] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795924] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is 
same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795929] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795933] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795938] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795942] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795946] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795952] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795956] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795962] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795967] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795971] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795976] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795980] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795985] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795989] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795993] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.795998] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.796002] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.796007] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.796012] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.796016] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.796021] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.796025] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.796029] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.796034] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b4e0 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.796708] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.796730] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.978 [2024-07-15 16:14:54.796735] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796740] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796745] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796750] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796755] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796759] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796767] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796772] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796777] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796782] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796786] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796791] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796796] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796800] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796805] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796810] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796814] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796819] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796823] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796827] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796832] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796837] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796842] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796846] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796850] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796855] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796859] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796863] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796868] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796873] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796878] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796882] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796886] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796891] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796897] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796901] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796906] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796910] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796915] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 
00:23:18.979 [2024-07-15 16:14:54.796919] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796924] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796928] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796933] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796937] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796942] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796946] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796950] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796955] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796959] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796963] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796968] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796972] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796977] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796981] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796986] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796990] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796994] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.796998] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.797003] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.797007] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.797012] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b980 is 
same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.797672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.979 [2024-07-15 16:14:54.797707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.979 [2024-07-15 16:14:54.797718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.979 [2024-07-15 16:14:54.797726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.979 [2024-07-15 16:14:54.797735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.979 [2024-07-15 16:14:54.797742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.979 [2024-07-15 16:14:54.797751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.979 [2024-07-15 16:14:54.797758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.979 [2024-07-15 16:14:54.797765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c2ca0 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.797882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.979 [2024-07-15 16:14:54.797893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.979 [2024-07-15 16:14:54.797901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.979 [2024-07-15 16:14:54.797908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.979 [2024-07-15 16:14:54.797916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.979 [2024-07-15 16:14:54.797923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.979 [2024-07-15 16:14:54.797932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.979 [2024-07-15 16:14:54.797939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.979 [2024-07-15 16:14:54.797946] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16470c0 is same with the state(5) to be set 00:23:18.979 [2024-07-15 16:14:54.797973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.979 [2024-07-15 16:14:54.797981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.979 [2024-07-15 16:14:54.797989] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.979 [2024-07-15 16:14:54.797996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.979 [2024-07-15 16:14:54.798004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.979 [2024-07-15 16:14:54.798011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.979 [2024-07-15 16:14:54.798019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.979 [2024-07-15 16:14:54.798025] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with t[2024-07-15 16:14:54.798030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 che state(5) to be set 00:23:18.979 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.980 [2024-07-15 16:14:54.798041] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with t[2024-07-15 16:14:54.798040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14865d0 is same he state(5) to be set 00:23:18.980 with the state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798048] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with the state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798053] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with the state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798058] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with the state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798063] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with the state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.980 [2024-07-15 16:14:54.798068] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with the state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798073] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with the state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.980 [2024-07-15 16:14:54.798079] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with the state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 ns[2024-07-15 16:14:54.798084] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with tid:0 cdw10:00000000 cdw11:00000000 00:23:18.980 he state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798091] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with the state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798092] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.980 [2024-07-15 16:14:54.798096] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with the state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 ns[2024-07-15 16:14:54.798101] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with tid:0 cdw10:00000000 cdw11:00000000 00:23:18.980 he state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798109] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with the state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.980 [2024-07-15 16:14:54.798114] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with the state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 ns[2024-07-15 16:14:54.798119] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with tid:0 cdw10:00000000 cdw11:00000000 00:23:18.980 he state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798132] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with the state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.980 [2024-07-15 16:14:54.798137] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with the state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165a990 is same [2024-07-15 16:14:54.798144] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with twith the state(5) to be set 00:23:18.980 he state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798151] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with the state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798156] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with the state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798160] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with the state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798165] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with the state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798170] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with the state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798174] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with the state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798179] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with the state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) 
qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.980 [2024-07-15 16:14:54.798184] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with the state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798190] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with t[2024-07-15 16:14:54.798189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 che state(5) to be set 00:23:18.980 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.980 [2024-07-15 16:14:54.798196] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with the state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 ns[2024-07-15 16:14:54.798201] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with tid:0 cdw10:00000000 cdw11:00000000 00:23:18.980 he state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798208] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with the state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.980 [2024-07-15 16:14:54.798212] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with the state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 ns[2024-07-15 16:14:54.798218] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with tid:0 cdw10:00000000 cdw11:00000000 00:23:18.980 he state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798225] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with the state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.980 [2024-07-15 16:14:54.798230] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with the state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798235] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with t[2024-07-15 16:14:54.798234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nshe state(5) to be set 00:23:18.980 id:0 cdw10:00000000 cdw11:00000000 00:23:18.980 [2024-07-15 16:14:54.798244] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with the state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.980 [2024-07-15 16:14:54.798249] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with the state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798254] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with the state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652210 is same with the state(5) to be set 
00:23:18.980 [2024-07-15 16:14:54.798259] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with the state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798264] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with the state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798269] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with the state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798274] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with the state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 ns[2024-07-15 16:14:54.798279] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with tid:0 cdw10:00000000 cdw11:00000000 00:23:18.980 he state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798286] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with the state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.980 [2024-07-15 16:14:54.798292] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with the state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 ns[2024-07-15 16:14:54.798297] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with tid:0 cdw10:00000000 cdw11:00000000 00:23:18.980 he state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798304] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with the state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.980 [2024-07-15 16:14:54.798309] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with the state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 ns[2024-07-15 16:14:54.798314] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with tid:0 cdw10:00000000 cdw11:00000000 00:23:18.980 he state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798322] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with the state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.980 [2024-07-15 16:14:54.798327] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with the state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.980 [2024-07-15 16:14:54.798333] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with the state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798339] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with t[2024-07-15 16:14:54.798339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 che state(5) to be set 00:23:18.980 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.980 [2024-07-15 16:14:54.798348] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with the state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798350] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c3030 is same with the state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798353] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with the state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798358] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with the state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798363] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with the state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798367] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with the state(5) to be set 00:23:18.980 [2024-07-15 16:14:54.798372] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.798377] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.798381] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.798386] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83be40 is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799254] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799276] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799282] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799286] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799291] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799297] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799302] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799306] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799311] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799316] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be 
set 00:23:18.981 [2024-07-15 16:14:54.799320] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799325] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799329] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799334] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799339] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799344] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799357] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799362] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799366] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799371] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799376] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799380] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799385] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799389] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799394] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799399] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799403] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799408] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799413] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799417] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799422] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799426] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 
is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.981 [2024-07-15 16:14:54.799432] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799437] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-15 16:14:54.799442] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.981 he state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799450] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799454] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799459] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with t[2024-07-15 16:14:54.799458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:12he state(5) to be set 00:23:18.981 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.981 [2024-07-15 16:14:54.799466] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-15 16:14:54.799471] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.981 he state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799481] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:12[2024-07-15 16:14:54.799485] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with t8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.981 he state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799493] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.981 [2024-07-15 16:14:54.799498] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799503] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.981 [2024-07-15 16:14:54.799508] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be 
set 00:23:18.981 [2024-07-15 16:14:54.799511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-15 16:14:54.799513] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.981 he state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799520] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:12[2024-07-15 16:14:54.799524] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with t8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.981 he state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799530] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.981 [2024-07-15 16:14:54.799535] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799541] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.981 [2024-07-15 16:14:54.799546] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799551] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.981 [2024-07-15 16:14:54.799557] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.981 [2024-07-15 16:14:54.799562] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.982 [2024-07-15 16:14:54.799562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.982 [2024-07-15 16:14:54.799567] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.982 [2024-07-15 16:14:54.799572] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.982 [2024-07-15 16:14:54.799572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.982 [2024-07-15 16:14:54.799577] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.982 [2024-07-15 16:14:54.799582] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.982 [2024-07-15 
16:14:54.799583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.982 [2024-07-15 16:14:54.799587] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.982 [2024-07-15 16:14:54.799592] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with t[2024-07-15 16:14:54.799592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(5) to be set 00:23:18.982 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.982 [2024-07-15 16:14:54.799599] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c300 is same with the state(5) to be set 00:23:18.982 [2024-07-15 16:14:54.799603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.982 [2024-07-15 16:14:54.799611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.982 [2024-07-15 16:14:54.799620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.982 [2024-07-15 16:14:54.799627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.982 [2024-07-15 16:14:54.799636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.982 [2024-07-15 16:14:54.799643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.982 [2024-07-15 16:14:54.799653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.982 [2024-07-15 16:14:54.799660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.982 [2024-07-15 16:14:54.799669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.982 [2024-07-15 16:14:54.799676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.982 [2024-07-15 16:14:54.799685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.982 [2024-07-15 16:14:54.799693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.982 [2024-07-15 16:14:54.799702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.982 [2024-07-15 16:14:54.799709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.982 [2024-07-15 16:14:54.799719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.982 [2024-07-15 16:14:54.799726] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.982 [2024-07-15 16:14:54.799737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.982 [2024-07-15 16:14:54.799744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.982 [2024-07-15 16:14:54.799753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.982 [2024-07-15 16:14:54.799760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.982 [2024-07-15 16:14:54.799770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.982 [2024-07-15 16:14:54.799777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.982 [2024-07-15 16:14:54.799787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.982 [2024-07-15 16:14:54.799795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.982 [2024-07-15 16:14:54.799804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.982 [2024-07-15 16:14:54.799811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.982 [2024-07-15 16:14:54.799821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.982 [2024-07-15 16:14:54.799828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.982 [2024-07-15 16:14:54.799837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.982 [2024-07-15 16:14:54.799844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.982 [2024-07-15 16:14:54.799853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.982 [2024-07-15 16:14:54.799861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.982 [2024-07-15 16:14:54.799870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.982 [2024-07-15 16:14:54.799877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.982 [2024-07-15 16:14:54.799886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.982 [2024-07-15 16:14:54.799893] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.982 [2024-07-15 16:14:54.799902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.982 [2024-07-15 16:14:54.799909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.982 [2024-07-15 16:14:54.799918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.982 [2024-07-15 16:14:54.799925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.982 [2024-07-15 16:14:54.799934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.982 [2024-07-15 16:14:54.799942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.982 [2024-07-15 16:14:54.799951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.982 [2024-07-15 16:14:54.799959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.982 [2024-07-15 16:14:54.799970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.982 [2024-07-15 16:14:54.799978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.982 [2024-07-15 16:14:54.799988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.982 [2024-07-15 16:14:54.799995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.982 [2024-07-15 16:14:54.800004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.982 [2024-07-15 16:14:54.800012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.982 [2024-07-15 16:14:54.800021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.982 [2024-07-15 16:14:54.800029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.982 [2024-07-15 16:14:54.800038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.982 [2024-07-15 16:14:54.800045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.982 [2024-07-15 16:14:54.800055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.982 [2024-07-15 16:14:54.800062] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.982 [2024-07-15 16:14:54.800071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.982 [2024-07-15 16:14:54.800079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.982 [2024-07-15 16:14:54.800088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.982 [2024-07-15 16:14:54.800096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.982 [2024-07-15 16:14:54.800105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.982 [2024-07-15 16:14:54.800113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.982 [2024-07-15 16:14:54.800127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.982 [2024-07-15 16:14:54.800135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.982 [2024-07-15 16:14:54.800145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.982 [2024-07-15 16:14:54.800152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.982 [2024-07-15 16:14:54.800167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.982 [2024-07-15 16:14:54.800175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.982 [2024-07-15 16:14:54.800184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.982 [2024-07-15 16:14:54.800191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.982 [2024-07-15 16:14:54.800201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.982 [2024-07-15 16:14:54.800208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.982 [2024-07-15 16:14:54.800217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.982 [2024-07-15 16:14:54.800225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.983 [2024-07-15 16:14:54.800234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.983 [2024-07-15 16:14:54.800242] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.983 [interleaved output, 2024-07-15 16:14:54.800238-54.800604, collapsed here: nvme_qpair.c: 243:nvme_io_qpair_print_command WRITE sqid:1 cid:46-63 nsid:1 lba:30464-32640 len:128 notices and their nvme_qpair.c: 474:spdk_nvme_print_completion ABORTED - SQ DELETION (00/08) completions, interleaved with repeated tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71440 is same with the state(5) to be set]
00:23:18.984 [2024-07-15 16:14:54.800616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:23:18.984 [2024-07-15 16:14:54.800659] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15c14e0 was disconnected and freed. reset controller.
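The block above is the interesting part of this dump: once the TCP connection to the target drops, every outstanding WRITE on qpair 1 is completed with ABORTED - SQ DELETION, spdk_nvme_qpair_process_completions then reports CQ transport error -6 (-ENXIO), and bdev_nvme's disconnected-qpair callback frees the qpair and resets the controller. As a rough, illustrative sketch (not the code this test runs, which lives in bdev_nvme.c), a host application polling its own I/O qpair could react to the same error as shown below; poll_io_qpair and its recovery policy are hypothetical, only the spdk_nvme_* calls are real SPDK API.

/*
 * Illustrative sketch only: how an SPDK NVMe host application could react to
 * the "CQ transport error -6 (No such device or address)" seen in the log
 * above. In this test the real handling is done inside bdev_nvme.c
 * (bdev_nvme_disconnected_qpair_cb); poll_io_qpair below is a hypothetical
 * helper, and only the spdk_nvme_* calls are actual SPDK API.
 */
#include <stdio.h>

#include "spdk/nvme.h"

static int
poll_io_qpair(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair **qpair)
{
	/* 0 == reap as many completions as are currently available. */
	int32_t rc = spdk_nvme_qpair_process_completions(*qpair, 0);

	if (rc >= 0) {
		return rc;	/* number of completions processed */
	}

	/* A negative return (-6 == -ENXIO here) means the qpair's connection is gone. */
	fprintf(stderr, "I/O qpair failed (rc=%d), resetting controller\n", rc);

	spdk_nvme_ctrlr_free_io_qpair(*qpair);
	if (spdk_nvme_ctrlr_reset(ctrlr) != 0) {
		return -1;	/* controller could not be recovered */
	}

	/* Outstanding I/O came back as ABORTED - SQ DELETION; the caller resubmits it. */
	*qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
	return *qpair != NULL ? 0 : -1;
}

In the autotest itself the same recovery appears to be driven by the bdev layer's poll group, which is why the log shows bdev_nvme_disconnected_qpair_cb, rather than application code, freeing qpair 0x15c14e0 and resetting the controller.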
00:23:18.984 [2024-07-15 16:14:54.800695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.984 [2024-07-15 16:14:54.800703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.984 [2024-07-15 16:14:54.800715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.984 [2024-07-15 16:14:54.800722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.984 [2024-07-15 16:14:54.800732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.984 [2024-07-15 16:14:54.800739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.984 [2024-07-15 16:14:54.800748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.984 [2024-07-15 16:14:54.800755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.984 [2024-07-15 16:14:54.800766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.984 [2024-07-15 16:14:54.800774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.984 [2024-07-15 16:14:54.800784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.984 [2024-07-15 16:14:54.800791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.984 [2024-07-15 16:14:54.800800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.984 [2024-07-15 16:14:54.800807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.984 [2024-07-15 16:14:54.800818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.984 [2024-07-15 16:14:54.800825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.984 [2024-07-15 16:14:54.800836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.984 [2024-07-15 16:14:54.800844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.984 [2024-07-15 16:14:54.800854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.984 [2024-07-15 16:14:54.800863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.984 [2024-07-15 
16:14:54.800873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.984 [2024-07-15 16:14:54.800881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.984 [2024-07-15 16:14:54.800890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.984 [2024-07-15 16:14:54.800897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.984 [2024-07-15 16:14:54.800907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.984 [2024-07-15 16:14:54.800915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.984 [2024-07-15 16:14:54.800924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.984 [2024-07-15 16:14:54.800932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.984 [2024-07-15 16:14:54.800941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.984 [2024-07-15 16:14:54.800948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.984 [2024-07-15 16:14:54.800957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.984 [2024-07-15 16:14:54.800964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.984 [2024-07-15 16:14:54.800974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.984 [2024-07-15 16:14:54.800981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.984 [2024-07-15 16:14:54.800990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.984 [2024-07-15 16:14:54.800997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.984 [2024-07-15 16:14:54.801007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.984 [2024-07-15 16:14:54.801014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.984 [2024-07-15 16:14:54.801024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.984 [2024-07-15 16:14:54.801031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.984 [2024-07-15 
16:14:54.801040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.984 [2024-07-15 16:14:54.801049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.984 [interleaved output, 2024-07-15 16:14:54.801051-54.802257, collapsed here: nvme_qpair.c: 243:nvme_io_qpair_print_command WRITE sqid:1 cid:21-40 nsid:1 lba:27264-29696 len:128 notices and their nvme_qpair.c: 474:spdk_nvme_print_completion ABORTED - SQ DELETION (00/08) completions, interleaved with repeated tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa718e0 is same with the state(5) to be set]
00:23:18.986 [2024-07-15 16:14:54.802322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.986 [2024-07-15
16:14:54.802372] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa718e0 is same with the state(5) to be set 00:23:18.986 [2024-07-15 16:14:54.802425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.986 [2024-07-15 16:14:54.802538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.986 [2024-07-15 16:14:54.802589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.986 [2024-07-15 16:14:54.802644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.986 [2024-07-15 16:14:54.802695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.986 [2024-07-15 16:14:54.802751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.986 [2024-07-15 16:14:54.802798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.986 [2024-07-15 16:14:54.802851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.986 [2024-07-15 16:14:54.802900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.986 [2024-07-15 16:14:54.802959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.986 [2024-07-15 16:14:54.803007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.986 [2024-07-15 16:14:54.803060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.986 [2024-07-15 16:14:54.803107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.986 [2024-07-15 16:14:54.803170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.986 [2024-07-15 16:14:54.803218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.986 [2024-07-15 16:14:54.803271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.986 [2024-07-15 16:14:54.803318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.986 [2024-07-15 16:14:54.803369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.986 [2024-07-15 16:14:54.803418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.986 [2024-07-15 16:14:54.803470] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.986 [2024-07-15 16:14:54.803522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.986 [2024-07-15 16:14:54.803575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.986 [2024-07-15 16:14:54.803623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.986 [2024-07-15 16:14:54.803675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.986 [2024-07-15 16:14:54.803727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.986 [2024-07-15 16:14:54.803779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.986 [2024-07-15 16:14:54.803825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.986 [2024-07-15 16:14:54.803875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.986 [2024-07-15 16:14:54.803923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.986 [2024-07-15 16:14:54.803976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.986 [2024-07-15 16:14:54.804027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.986 [2024-07-15 16:14:54.804079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.986 [2024-07-15 16:14:54.804130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.986 [2024-07-15 16:14:54.804205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.986 [2024-07-15 16:14:54.804253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.986 [2024-07-15 16:14:54.804308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.986 [2024-07-15 16:14:54.804354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.986 [2024-07-15 16:14:54.804408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.986 [2024-07-15 16:14:54.804455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.986 [2024-07-15 16:14:54.804507] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.986 [2024-07-15 16:14:54.804552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.986 [2024-07-15 16:14:54.804607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.986 [2024-07-15 16:14:54.804655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.986 [2024-07-15 16:14:54.804705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.986 [2024-07-15 16:14:54.804752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.986 [2024-07-15 16:14:54.804837] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15c2970 was disconnected and freed. reset controller. 00:23:18.986 [2024-07-15 16:14:54.805206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.986 [2024-07-15 16:14:54.805225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 16:14:54.819186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.259 [2024-07-15 16:14:54.819227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 16:14:54.819241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.259 [2024-07-15 16:14:54.819251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 16:14:54.819263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.259 [2024-07-15 16:14:54.819278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 16:14:54.819289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.259 [2024-07-15 16:14:54.819297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 16:14:54.819306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.259 [2024-07-15 16:14:54.819314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 16:14:54.819324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.259 [2024-07-15 16:14:54.819331] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 16:14:54.819341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.259 [2024-07-15 16:14:54.819349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 16:14:54.819359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.259 [2024-07-15 16:14:54.819367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 16:14:54.819376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.259 [2024-07-15 16:14:54.819384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 16:14:54.819393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.259 [2024-07-15 16:14:54.819401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 16:14:54.819411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.259 [2024-07-15 16:14:54.819418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 16:14:54.819428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.259 [2024-07-15 16:14:54.819436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 16:14:54.819446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.259 [2024-07-15 16:14:54.819453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 16:14:54.819463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.259 [2024-07-15 16:14:54.819470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 16:14:54.819480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.259 [2024-07-15 16:14:54.819487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 16:14:54.819499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.259 [2024-07-15 16:14:54.819508] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.259 [2024-07-15 16:14:54.819517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.260 [2024-07-15 16:14:54.819525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.260 [2024-07-15 16:14:54.819534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.260 [2024-07-15 16:14:54.819542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.260 [2024-07-15 16:14:54.819551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.260 [2024-07-15 16:14:54.819559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.260 [2024-07-15 16:14:54.819568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.260 [2024-07-15 16:14:54.819575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.260 [2024-07-15 16:14:54.819585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.260 [2024-07-15 16:14:54.819593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.260 [2024-07-15 16:14:54.819602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.260 [2024-07-15 16:14:54.819611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.260 [2024-07-15 16:14:54.819621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.260 [2024-07-15 16:14:54.819628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.260 [2024-07-15 16:14:54.819637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.260 [2024-07-15 16:14:54.819644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.260 [2024-07-15 16:14:54.819653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.260 [2024-07-15 16:14:54.819661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.260 [2024-07-15 16:14:54.819671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.260 [2024-07-15 16:14:54.819679] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.260 [2024-07-15 16:14:54.819688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.260 [2024-07-15 16:14:54.819695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.260 [2024-07-15 16:14:54.819704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.260 [2024-07-15 16:14:54.819713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.260 [2024-07-15 16:14:54.819723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.260 [2024-07-15 16:14:54.819730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.260 [2024-07-15 16:14:54.819740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.260 [2024-07-15 16:14:54.819746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.260 [2024-07-15 16:14:54.819756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.260 [2024-07-15 16:14:54.819764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.260 [2024-07-15 16:14:54.819773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.260 [2024-07-15 16:14:54.819781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.260 [2024-07-15 16:14:54.819791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.260 [2024-07-15 16:14:54.819797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.260 [2024-07-15 16:14:54.819807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.260 [2024-07-15 16:14:54.819814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.260 [2024-07-15 16:14:54.819823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.260 [2024-07-15 16:14:54.819831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.260 [2024-07-15 16:14:54.819842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.260 [2024-07-15 16:14:54.819850] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.260 [2024-07-15 16:14:54.819860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.260 [2024-07-15 16:14:54.819867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.260 [2024-07-15 16:14:54.819877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.260 [2024-07-15 16:14:54.819885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.260 [2024-07-15 16:14:54.819895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.260 [2024-07-15 16:14:54.819903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.260 [2024-07-15 16:14:54.819912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.260 [2024-07-15 16:14:54.819920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.260 [2024-07-15 16:14:54.819930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.260 [2024-07-15 16:14:54.819938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.260 [2024-07-15 16:14:54.819947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.260 [2024-07-15 16:14:54.819955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.260 [2024-07-15 16:14:54.819965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.260 [2024-07-15 16:14:54.819973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.260 [2024-07-15 16:14:54.819983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.260 [2024-07-15 16:14:54.819990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.260 [2024-07-15 16:14:54.820000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.260 [2024-07-15 16:14:54.820007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.260 [2024-07-15 16:14:54.820016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.260 [2024-07-15 16:14:54.820023] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.260 [2024-07-15 16:14:54.820033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.260 [2024-07-15 16:14:54.820041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.260 [2024-07-15 16:14:54.820050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.260 [2024-07-15 16:14:54.820057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.260 [2024-07-15 16:14:54.820067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.260 [2024-07-15 16:14:54.820075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.260 [2024-07-15 16:14:54.820085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.260 [2024-07-15 16:14:54.820092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.260 [2024-07-15 16:14:54.820101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.260 [2024-07-15 16:14:54.820109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.260 [2024-07-15 16:14:54.820118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.260 [2024-07-15 16:14:54.820131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.260 [2024-07-15 16:14:54.820141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.260 [2024-07-15 16:14:54.820147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.260 [2024-07-15 16:14:54.820159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.260 [2024-07-15 16:14:54.820166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.260 [2024-07-15 16:14:54.820175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.260 [2024-07-15 16:14:54.820183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.260 [2024-07-15 16:14:54.820192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.260 [2024-07-15 16:14:54.820200] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.260 [2024-07-15 16:14:54.820210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.260 [2024-07-15 16:14:54.820217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.260 [2024-07-15 16:14:54.820226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.260 [2024-07-15 16:14:54.820234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.261 [2024-07-15 16:14:54.820243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.261 [2024-07-15 16:14:54.820251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.261 [2024-07-15 16:14:54.820260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.261 [2024-07-15 16:14:54.820267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.261 [2024-07-15 16:14:54.820276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.261 [2024-07-15 16:14:54.820284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.261 [2024-07-15 16:14:54.820293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.261 [2024-07-15 16:14:54.820301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.261 [2024-07-15 16:14:54.820310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.261 [2024-07-15 16:14:54.820317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.261 [2024-07-15 16:14:54.820392] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x152a4b0 was disconnected and freed. reset controller. 
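The "(00/08)" pair repeated in the notices above is the NVMe completion status printed as Status Code Type / Status Code: SCT 0x0 (generic command status) with SC 0x08 (Command Aborted due to SQ Deletion), which lines up with qpair 0x152a4b0 being disconnected, freed, and reset at the end of this block. As a point of reference only, a minimal standalone C sketch (not SPDK code; the helper name is invented for illustration) of how that 16-bit status field splits into the sct/sc and p/m/dnr values shown in each notice:

/*
 * Hypothetical decoder for the NVMe completion status field, mirroring the
 * values spdk_nvme_print_completion prints above. Per the NVMe base spec:
 * bit 0 = phase tag (P), bits 8:1 = Status Code (SC),
 * bits 11:9 = Status Code Type (SCT), bit 14 = More (M), bit 15 = Do Not Retry (DNR).
 */
#include <stdint.h>
#include <stdio.h>

static void decode_nvme_status(uint16_t status)
{
	unsigned p   = status & 0x1;          /* phase tag */
	unsigned sc  = (status >> 1) & 0xff;  /* status code; 0x08 = aborted, SQ deletion */
	unsigned sct = (status >> 9) & 0x7;   /* status code type; 0x0 = generic */
	unsigned m   = (status >> 14) & 0x1;  /* more status information available */
	unsigned dnr = (status >> 15) & 0x1;  /* do not retry */

	printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
}

int main(void)
{
	decode_nvme_status(0x08 << 1);  /* prints "(00/08) p:0 m:0 dnr:0", as in the log */
	return 0;
}

The "connect() failed, errno = 111" entries further down are plain ECONNREFUSED results from the reconnect attempts to 10.0.0.2:4420 recorded alongside them.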
00:23:19.261 [2024-07-15 16:14:54.823076] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:23:19.261 [2024-07-15 16:14:54.823106] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:23:19.261 [2024-07-15 16:14:54.823128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165a990 (9): Bad file descriptor 00:23:19.261 [2024-07-15 16:14:54.823142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1652210 (9): Bad file descriptor 00:23:19.261 [2024-07-15 16:14:54.823159] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c2ca0 (9): Bad file descriptor 00:23:19.261 [2024-07-15 16:14:54.823198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.261 [2024-07-15 16:14:54.823209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.261 [2024-07-15 16:14:54.823219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.261 [2024-07-15 16:14:54.823226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.261 [2024-07-15 16:14:54.823234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.261 [2024-07-15 16:14:54.823241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.261 [2024-07-15 16:14:54.823250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.261 [2024-07-15 16:14:54.823257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.261 [2024-07-15 16:14:54.823265] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1648290 is same with the state(5) to be set 00:23:19.261 [2024-07-15 16:14:54.823283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16470c0 (9): Bad file descriptor 00:23:19.261 [2024-07-15 16:14:54.823302] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14865d0 (9): Bad file descriptor 00:23:19.261 [2024-07-15 16:14:54.823326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.261 [2024-07-15 16:14:54.823335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.261 [2024-07-15 16:14:54.823343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.261 [2024-07-15 16:14:54.823350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.261 [2024-07-15 16:14:54.823358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.261 [2024-07-15 
16:14:54.823365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.261 [2024-07-15 16:14:54.823374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.261 [2024-07-15 16:14:54.823382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.261 [2024-07-15 16:14:54.823389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd8340 is same with the state(5) to be set 00:23:19.261 [2024-07-15 16:14:54.823414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.261 [2024-07-15 16:14:54.823423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.261 [2024-07-15 16:14:54.823432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.261 [2024-07-15 16:14:54.823439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.261 [2024-07-15 16:14:54.823448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.261 [2024-07-15 16:14:54.823457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.261 [2024-07-15 16:14:54.823466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.261 [2024-07-15 16:14:54.823473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.261 [2024-07-15 16:14:54.823481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1623e90 is same with the state(5) to be set 00:23:19.261 [2024-07-15 16:14:54.823500] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c3030 (9): Bad file descriptor 00:23:19.261 [2024-07-15 16:14:54.823526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.261 [2024-07-15 16:14:54.823535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.261 [2024-07-15 16:14:54.823544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.261 [2024-07-15 16:14:54.823551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.261 [2024-07-15 16:14:54.823560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.261 [2024-07-15 16:14:54.823567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.261 [2024-07-15 16:14:54.823576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.261 [2024-07-15 16:14:54.823584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.261 [2024-07-15 16:14:54.823591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a91b0 is same with the state(5) to be set 00:23:19.261 [2024-07-15 16:14:54.825146] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:19.261 [2024-07-15 16:14:54.825750] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:19.261 [2024-07-15 16:14:54.825799] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:19.261 [2024-07-15 16:14:54.825998] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:19.261 [2024-07-15 16:14:54.826587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.261 [2024-07-15 16:14:54.826628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1652210 with addr=10.0.0.2, port=4420 00:23:19.262 [2024-07-15 16:14:54.826642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652210 is same with the state(5) to be set 00:23:19.262 [2024-07-15 16:14:54.827077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.262 [2024-07-15 16:14:54.827089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a990 with addr=10.0.0.2, port=4420 00:23:19.262 [2024-07-15 16:14:54.827097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165a990 is same with the state(5) to be set 00:23:19.262 [2024-07-15 16:14:54.827593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.262 [2024-07-15 16:14:54.827631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14865d0 with addr=10.0.0.2, port=4420 00:23:19.262 [2024-07-15 16:14:54.827642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14865d0 is same with the state(5) to be set 00:23:19.262 [2024-07-15 16:14:54.828006] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:19.262 [2024-07-15 16:14:54.828050] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:19.262 [2024-07-15 16:14:54.828098] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:19.262 [2024-07-15 16:14:54.828142] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:19.262 [2024-07-15 16:14:54.828168] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1652210 (9): Bad file descriptor 00:23:19.262 [2024-07-15 16:14:54.828181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165a990 (9): Bad file descriptor 00:23:19.262 [2024-07-15 16:14:54.828191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14865d0 (9): Bad file descriptor 00:23:19.262 [2024-07-15 16:14:54.828283] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:19.262 [2024-07-15 16:14:54.828295] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:23:19.262 [2024-07-15 16:14:54.828304] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:23:19.262 [2024-07-15 16:14:54.828319] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:23:19.262 [2024-07-15 16:14:54.828327] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:23:19.262 [2024-07-15 16:14:54.828334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:23:19.262 [2024-07-15 16:14:54.828346] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:19.262 [2024-07-15 16:14:54.828353] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:19.262 [2024-07-15 16:14:54.828360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:19.262 [2024-07-15 16:14:54.828407] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:19.262 [2024-07-15 16:14:54.828416] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:19.262 [2024-07-15 16:14:54.828423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:19.262 [2024-07-15 16:14:54.833119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1648290 (9): Bad file descriptor 00:23:19.262 [2024-07-15 16:14:54.833152] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd8340 (9): Bad file descriptor 00:23:19.262 [2024-07-15 16:14:54.833169] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1623e90 (9): Bad file descriptor 00:23:19.262 [2024-07-15 16:14:54.833191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a91b0 (9): Bad file descriptor 00:23:19.262 [2024-07-15 16:14:54.833298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.262 [2024-07-15 16:14:54.833310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.262 [2024-07-15 16:14:54.833327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.262 [2024-07-15 16:14:54.833336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.262 [2024-07-15 16:14:54.833346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.262 [2024-07-15 16:14:54.833354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.262 [2024-07-15 16:14:54.833364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.262 [2024-07-15 16:14:54.833371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.262 [2024-07-15 16:14:54.833385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.262 [2024-07-15 16:14:54.833393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.262 [2024-07-15 16:14:54.833403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.262 [2024-07-15 16:14:54.833410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.262 [2024-07-15 16:14:54.833420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.262 [2024-07-15 16:14:54.833428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.262 [2024-07-15 16:14:54.833438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.262 [2024-07-15 16:14:54.833445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.262 [2024-07-15 16:14:54.833455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.262 [2024-07-15 16:14:54.833462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.262 [2024-07-15 16:14:54.833472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.262 [2024-07-15 16:14:54.833480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.262 [2024-07-15 16:14:54.833490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.262 [2024-07-15 16:14:54.833498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.262 [2024-07-15 16:14:54.833508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.262 [2024-07-15 16:14:54.833516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.262 [2024-07-15 16:14:54.833525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.262 [2024-07-15 16:14:54.833533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.262 [2024-07-15 16:14:54.833543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.262 [2024-07-15 16:14:54.833551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.262 [2024-07-15 16:14:54.833560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.262 [2024-07-15 16:14:54.833568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.262 [2024-07-15 16:14:54.833577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.263 [2024-07-15 16:14:54.833585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.263 [2024-07-15 16:14:54.833595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.263 [2024-07-15 16:14:54.833604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.263 [2024-07-15 16:14:54.833614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.263 [2024-07-15 16:14:54.833621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.263 [2024-07-15 16:14:54.833631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.263 [2024-07-15 16:14:54.833639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.263 [2024-07-15 16:14:54.833648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.263 [2024-07-15 16:14:54.833656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.263 [2024-07-15 16:14:54.833665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.263 [2024-07-15 16:14:54.833674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.263 [2024-07-15 16:14:54.833684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.263 [2024-07-15 16:14:54.833691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.263 [2024-07-15 16:14:54.833701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.263 [2024-07-15 16:14:54.833708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.263 [2024-07-15 16:14:54.833719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.263 [2024-07-15 16:14:54.833726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.263 [2024-07-15 16:14:54.833736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:19.263 [2024-07-15 16:14:54.833744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.263 [2024-07-15 16:14:54.833754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.263 [2024-07-15 16:14:54.833762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.263 [2024-07-15 16:14:54.833772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.263 [2024-07-15 16:14:54.833779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.263 [2024-07-15 16:14:54.833789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.263 [2024-07-15 16:14:54.833797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.263 [2024-07-15 16:14:54.833806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.263 [2024-07-15 16:14:54.833814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.263 [2024-07-15 16:14:54.833825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.263 [2024-07-15 16:14:54.833833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.263 [2024-07-15 16:14:54.833842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.263 [2024-07-15 16:14:54.833850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.263 [2024-07-15 16:14:54.833859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.263 [2024-07-15 16:14:54.833867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.263 [2024-07-15 16:14:54.833876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.263 [2024-07-15 16:14:54.833884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.263 [2024-07-15 16:14:54.833894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.263 [2024-07-15 16:14:54.833902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.263 [2024-07-15 16:14:54.833911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:19.263 [2024-07-15 16:14:54.833918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.263 [2024-07-15 16:14:54.833928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.263 [2024-07-15 16:14:54.833936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.263 [2024-07-15 16:14:54.833945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.263 [2024-07-15 16:14:54.833953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.263 [2024-07-15 16:14:54.833962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.263 [2024-07-15 16:14:54.833969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.263 [2024-07-15 16:14:54.833979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.263 [2024-07-15 16:14:54.833987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.263 [2024-07-15 16:14:54.833997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.263 [2024-07-15 16:14:54.834005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.263 [2024-07-15 16:14:54.834015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.263 [2024-07-15 16:14:54.834022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.263 [2024-07-15 16:14:54.834032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.263 [2024-07-15 16:14:54.834041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.263 [2024-07-15 16:14:54.834051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.263 [2024-07-15 16:14:54.834058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.263 [2024-07-15 16:14:54.834069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.263 [2024-07-15 16:14:54.834076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.263 [2024-07-15 16:14:54.834086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.263 [2024-07-15 
16:14:54.834093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.264 [2024-07-15 16:14:54.834103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.264 [2024-07-15 16:14:54.834111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.264 [2024-07-15 16:14:54.834121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.264 [2024-07-15 16:14:54.834133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.264 [2024-07-15 16:14:54.834143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.264 [2024-07-15 16:14:54.834151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.264 [2024-07-15 16:14:54.834160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.264 [2024-07-15 16:14:54.834168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.264 [2024-07-15 16:14:54.834177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.264 [2024-07-15 16:14:54.834185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.264 [2024-07-15 16:14:54.834194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.264 [2024-07-15 16:14:54.834202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.264 [2024-07-15 16:14:54.834212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.264 [2024-07-15 16:14:54.834220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.264 [2024-07-15 16:14:54.834229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.264 [2024-07-15 16:14:54.834237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.264 [2024-07-15 16:14:54.834246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.264 [2024-07-15 16:14:54.834254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.264 [2024-07-15 16:14:54.834266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.264 [2024-07-15 16:14:54.834274] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.264 [2024-07-15 16:14:54.834284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.264 [2024-07-15 16:14:54.834292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.264 [2024-07-15 16:14:54.834302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.264 [2024-07-15 16:14:54.834310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.264 [2024-07-15 16:14:54.834319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.264 [2024-07-15 16:14:54.834327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.264 [2024-07-15 16:14:54.834337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.264 [2024-07-15 16:14:54.834345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.264 [2024-07-15 16:14:54.834355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.264 [2024-07-15 16:14:54.834363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.264 [2024-07-15 16:14:54.834372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.264 [2024-07-15 16:14:54.834380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.264 [2024-07-15 16:14:54.834390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.264 [2024-07-15 16:14:54.834397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.264 [2024-07-15 16:14:54.834407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.264 [2024-07-15 16:14:54.834415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.264 [2024-07-15 16:14:54.834424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.264 [2024-07-15 16:14:54.834432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.264 [2024-07-15 16:14:54.834441] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1480530 is same with the state(5) to be set 00:23:19.264 [2024-07-15 16:14:54.835727] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.264 [2024-07-15 16:14:54.835743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.264 [2024-07-15 16:14:54.835757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.264 [2024-07-15 16:14:54.835766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.264 [2024-07-15 16:14:54.835780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.264 [2024-07-15 16:14:54.835789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.264 [2024-07-15 16:14:54.835800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.264 [2024-07-15 16:14:54.835808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.264 [2024-07-15 16:14:54.835820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.264 [2024-07-15 16:14:54.835828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.264 [2024-07-15 16:14:54.835837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.264 [2024-07-15 16:14:54.835845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.264 [2024-07-15 16:14:54.835854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.264 [2024-07-15 16:14:54.835861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.264 [2024-07-15 16:14:54.835870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.264 [2024-07-15 16:14:54.835878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.265 [2024-07-15 16:14:54.835887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.265 [2024-07-15 16:14:54.835895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.265 [2024-07-15 16:14:54.835905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.265 [2024-07-15 16:14:54.835913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.265 [2024-07-15 16:14:54.835923] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.265 [2024-07-15 16:14:54.835931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.265 [2024-07-15 16:14:54.835941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.265 [2024-07-15 16:14:54.835949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.265 [2024-07-15 16:14:54.835959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.265 [2024-07-15 16:14:54.835966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.265 [2024-07-15 16:14:54.835976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.265 [2024-07-15 16:14:54.835984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.265 [2024-07-15 16:14:54.835994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.265 [2024-07-15 16:14:54.836004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.265 [2024-07-15 16:14:54.836013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.265 [2024-07-15 16:14:54.836021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.265 [2024-07-15 16:14:54.836030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.265 [2024-07-15 16:14:54.836038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.265 [2024-07-15 16:14:54.836048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.265 [2024-07-15 16:14:54.836056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.265 [2024-07-15 16:14:54.836065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.265 [2024-07-15 16:14:54.836073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.265 [2024-07-15 16:14:54.836082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.265 [2024-07-15 16:14:54.836090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.265 [2024-07-15 16:14:54.836100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.265 [2024-07-15 16:14:54.836108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.265 [2024-07-15 16:14:54.836117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.265 [2024-07-15 16:14:54.836131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.265 [2024-07-15 16:14:54.836140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.265 [2024-07-15 16:14:54.836148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.265 [2024-07-15 16:14:54.836157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.265 [2024-07-15 16:14:54.836164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.265 [2024-07-15 16:14:54.836175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.265 [2024-07-15 16:14:54.836183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.265 [2024-07-15 16:14:54.836192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.265 [2024-07-15 16:14:54.836200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.265 [2024-07-15 16:14:54.836210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.265 [2024-07-15 16:14:54.836218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.265 [2024-07-15 16:14:54.836230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.265 [2024-07-15 16:14:54.836238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.265 [2024-07-15 16:14:54.836247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.265 [2024-07-15 16:14:54.836255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.265 [2024-07-15 16:14:54.836265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.265 [2024-07-15 16:14:54.836273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.265 [2024-07-15 16:14:54.836283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.265 [2024-07-15 16:14:54.836291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.266 [2024-07-15 16:14:54.836301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.266 [2024-07-15 16:14:54.836309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.266 [2024-07-15 16:14:54.836318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.266 [2024-07-15 16:14:54.836326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.266 [2024-07-15 16:14:54.836336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.266 [2024-07-15 16:14:54.836343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.266 [2024-07-15 16:14:54.836352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.266 [2024-07-15 16:14:54.836360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.266 [2024-07-15 16:14:54.836369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.266 [2024-07-15 16:14:54.836377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.266 [2024-07-15 16:14:54.836388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.266 [2024-07-15 16:14:54.836396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.266 [2024-07-15 16:14:54.836405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.266 [2024-07-15 16:14:54.836412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.266 [2024-07-15 16:14:54.836422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.266 [2024-07-15 16:14:54.836430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.266 [2024-07-15 16:14:54.836439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.266 [2024-07-15 16:14:54.836448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.266 [2024-07-15 16:14:54.836458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:19.266 [2024-07-15 16:14:54.836466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.266 [2024-07-15 16:14:54.836476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.266 [2024-07-15 16:14:54.836483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.266 [2024-07-15 16:14:54.836493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.266 [2024-07-15 16:14:54.836501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.266 [2024-07-15 16:14:54.836510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.266 [2024-07-15 16:14:54.836518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.266 [2024-07-15 16:14:54.836527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.266 [2024-07-15 16:14:54.836535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.266 [2024-07-15 16:14:54.836544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.266 [2024-07-15 16:14:54.836552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.266 [2024-07-15 16:14:54.836561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.266 [2024-07-15 16:14:54.836569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.266 [2024-07-15 16:14:54.836578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.266 [2024-07-15 16:14:54.836586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.266 [2024-07-15 16:14:54.836595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.266 [2024-07-15 16:14:54.836603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.266 [2024-07-15 16:14:54.836613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.266 [2024-07-15 16:14:54.836621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.266 [2024-07-15 16:14:54.836630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:19.266 [2024-07-15 16:14:54.836638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.266 [2024-07-15 16:14:54.836648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.266 [2024-07-15 16:14:54.836656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.266 [2024-07-15 16:14:54.836667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.266 [2024-07-15 16:14:54.836676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.266 [2024-07-15 16:14:54.836685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.266 [2024-07-15 16:14:54.836693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.266 [2024-07-15 16:14:54.836703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.266 [2024-07-15 16:14:54.836711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.266 [2024-07-15 16:14:54.836721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.266 [2024-07-15 16:14:54.836729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.266 [2024-07-15 16:14:54.836739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.266 [2024-07-15 16:14:54.836746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.266 [2024-07-15 16:14:54.836756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.266 [2024-07-15 16:14:54.836764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.266 [2024-07-15 16:14:54.836773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.266 [2024-07-15 16:14:54.836780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.266 [2024-07-15 16:14:54.836790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.267 [2024-07-15 16:14:54.836798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.267 [2024-07-15 16:14:54.836808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.267 [2024-07-15 
16:14:54.836816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.267 [2024-07-15 16:14:54.836825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.267 [2024-07-15 16:14:54.836833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.267 [2024-07-15 16:14:54.836843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.267 [2024-07-15 16:14:54.836850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.267 [2024-07-15 16:14:54.836859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.267 [2024-07-15 16:14:54.836867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.267 [2024-07-15 16:14:54.836876] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14819c0 is same with the state(5) to be set 00:23:19.267 [2024-07-15 16:14:54.838176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.267 [2024-07-15 16:14:54.838191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.267 [2024-07-15 16:14:54.838204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.267 [2024-07-15 16:14:54.838214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.267 [2024-07-15 16:14:54.838224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.267 [2024-07-15 16:14:54.838233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.267 [2024-07-15 16:14:54.838242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.267 [2024-07-15 16:14:54.838250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.267 [2024-07-15 16:14:54.838260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.267 [2024-07-15 16:14:54.838268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.267 [2024-07-15 16:14:54.838278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.267 [2024-07-15 16:14:54.838285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.267 [2024-07-15 16:14:54.838296] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.267 [2024-07-15 16:14:54.838304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.267 [2024-07-15 16:14:54.838313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.267 [2024-07-15 16:14:54.838321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.267 [2024-07-15 16:14:54.838330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.267 [2024-07-15 16:14:54.838338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.267 [2024-07-15 16:14:54.838348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.267 [2024-07-15 16:14:54.838355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.267 [2024-07-15 16:14:54.838364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.267 [2024-07-15 16:14:54.838372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.267 [2024-07-15 16:14:54.838381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.267 [2024-07-15 16:14:54.838389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.267 [2024-07-15 16:14:54.838398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.267 [2024-07-15 16:14:54.838406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.267 [2024-07-15 16:14:54.838418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.267 [2024-07-15 16:14:54.838426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.267 [2024-07-15 16:14:54.838436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.267 [2024-07-15 16:14:54.838443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.267 [2024-07-15 16:14:54.838453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.267 [2024-07-15 16:14:54.838461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.267 [2024-07-15 16:14:54.838470] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.267 [2024-07-15 16:14:54.838478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.267 [2024-07-15 16:14:54.838488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.267 [2024-07-15 16:14:54.838496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.267 [2024-07-15 16:14:54.838505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.267 [2024-07-15 16:14:54.838512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.267 [2024-07-15 16:14:54.838522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.267 [2024-07-15 16:14:54.838529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.267 [2024-07-15 16:14:54.838539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.267 [2024-07-15 16:14:54.838547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.267 [2024-07-15 16:14:54.838555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.267 [2024-07-15 16:14:54.838563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.267 [2024-07-15 16:14:54.838573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.267 [2024-07-15 16:14:54.838580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.267 [2024-07-15 16:14:54.838590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.267 [2024-07-15 16:14:54.838598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.268 [2024-07-15 16:14:54.838608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.268 [2024-07-15 16:14:54.838615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.268 [2024-07-15 16:14:54.838624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.268 [2024-07-15 16:14:54.838634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.268 [2024-07-15 16:14:54.838644] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.268 [2024-07-15 16:14:54.838652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.268 [2024-07-15 16:14:54.838662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.268 [2024-07-15 16:14:54.838670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.268 [2024-07-15 16:14:54.838679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.268 [2024-07-15 16:14:54.838687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.268 [2024-07-15 16:14:54.838696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.268 [2024-07-15 16:14:54.838704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.268 [2024-07-15 16:14:54.838714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.268 [2024-07-15 16:14:54.838721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.268 [2024-07-15 16:14:54.838730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.268 [2024-07-15 16:14:54.838738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.268 [2024-07-15 16:14:54.838747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.268 [2024-07-15 16:14:54.838755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.268 [2024-07-15 16:14:54.838764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.268 [2024-07-15 16:14:54.838772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.268 [2024-07-15 16:14:54.838781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.268 [2024-07-15 16:14:54.838789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.268 [2024-07-15 16:14:54.838798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.268 [2024-07-15 16:14:54.838807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.268 [2024-07-15 16:14:54.838816] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.268 [2024-07-15 16:14:54.838824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.268 [2024-07-15 16:14:54.838834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.268 [2024-07-15 16:14:54.838840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.268 [2024-07-15 16:14:54.838852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.268 [2024-07-15 16:14:54.838859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.268 [2024-07-15 16:14:54.838869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.268 [2024-07-15 16:14:54.838878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.268 [2024-07-15 16:14:54.838887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.268 [2024-07-15 16:14:54.838895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.268 [2024-07-15 16:14:54.838905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.268 [2024-07-15 16:14:54.838913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.268 [2024-07-15 16:14:54.838922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.268 [2024-07-15 16:14:54.838929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.268 [2024-07-15 16:14:54.838939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.268 [2024-07-15 16:14:54.838947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.268 [2024-07-15 16:14:54.838956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.268 [2024-07-15 16:14:54.838964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.268 [2024-07-15 16:14:54.838973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.268 [2024-07-15 16:14:54.838981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.268 [2024-07-15 16:14:54.838990] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.268 [2024-07-15 16:14:54.838998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.268 [2024-07-15 16:14:54.839008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.268 [2024-07-15 16:14:54.839016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.268 [2024-07-15 16:14:54.839026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.268 [2024-07-15 16:14:54.839034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.268 [2024-07-15 16:14:54.839043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.268 [2024-07-15 16:14:54.839051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.268 [2024-07-15 16:14:54.839061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.268 [2024-07-15 16:14:54.839073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.268 [2024-07-15 16:14:54.839083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.268 [2024-07-15 16:14:54.839091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.268 [2024-07-15 16:14:54.839100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.268 [2024-07-15 16:14:54.839108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.268 [2024-07-15 16:14:54.839118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.268 [2024-07-15 16:14:54.839130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.269 [2024-07-15 16:14:54.839141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.269 [2024-07-15 16:14:54.839148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.269 [2024-07-15 16:14:54.839157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.269 [2024-07-15 16:14:54.839165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.269 [2024-07-15 16:14:54.839175] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.269 [2024-07-15 16:14:54.839183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.269 [2024-07-15 16:14:54.839192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.269 [2024-07-15 16:14:54.839199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.269 [2024-07-15 16:14:54.839209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.269 [2024-07-15 16:14:54.839216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.269 [2024-07-15 16:14:54.839226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.269 [2024-07-15 16:14:54.839234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.269 [2024-07-15 16:14:54.839243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.269 [2024-07-15 16:14:54.839251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.269 [2024-07-15 16:14:54.839260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.269 [2024-07-15 16:14:54.839269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.269 [2024-07-15 16:14:54.839278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.269 [2024-07-15 16:14:54.839285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.269 [2024-07-15 16:14:54.839297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.269 [2024-07-15 16:14:54.839304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.269 [2024-07-15 16:14:54.839313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152ff60 is same with the state(5) to be set 00:23:19.269 [2024-07-15 16:14:54.840826] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:23:19.269 [2024-07-15 16:14:54.840850] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:23:19.269 [2024-07-15 16:14:54.840860] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:23:19.269 [2024-07-15 16:14:54.841524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.269 [2024-07-15 16:14:54.841565] 
nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14c2ca0 with addr=10.0.0.2, port=4420 [2024-07-15 16:14:54.841576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c2ca0 is same with the state(5) to be set
00:23:19.269 [2024-07-15 16:14:54.842002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:19.269 [2024-07-15 16:14:54.842015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14c3030 with addr=10.0.0.2, port=4420 [2024-07-15 16:14:54.842023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c3030 is same with the state(5) to be set
00:23:19.269 [2024-07-15 16:14:54.842475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:19.269 [2024-07-15 16:14:54.842486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16470c0 with addr=10.0.0.2, port=4420 [2024-07-15 16:14:54.842494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16470c0 is same with the state(5) to be set
00:23:19.269 [2024-07-15 16:14:54.843321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:19.269 [2024-07-15 16:14:54.843338] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:23:19.269 [2024-07-15 16:14:54.843349] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:23:19.269 [2024-07-15 16:14:54.843380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c2ca0 (9): Bad file descriptor
00:23:19.269 [2024-07-15 16:14:54.843391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c3030 (9): Bad file descriptor
00:23:19.269 [2024-07-15 16:14:54.843400] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16470c0 (9): Bad file descriptor
00:23:19.269 [2024-07-15 16:14:54.843773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:19.269 [2024-07-15 16:14:54.843787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14865d0 with addr=10.0.0.2, port=4420 [2024-07-15 16:14:54.843795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14865d0 is same with the state(5) to be set
00:23:19.269 [2024-07-15 16:14:54.844224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:19.269 [2024-07-15 16:14:54.844237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a990 with addr=10.0.0.2, port=4420 [2024-07-15 16:14:54.844244] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165a990 is same with the state(5) to be set
00:23:19.269 [2024-07-15 16:14:54.844667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:19.269 [2024-07-15 16:14:54.844678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1652210 with addr=10.0.0.2, port=4420 [2024-07-15 16:14:54.844690] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652210 is same with the state(5) to be set
00:23:19.269 [2024-07-15 16:14:54.844698] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*:
[nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:23:19.269 [2024-07-15 16:14:54.844709] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:23:19.269 [2024-07-15 16:14:54.844718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:23:19.269 [2024-07-15 16:14:54.844730] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:23:19.269 [2024-07-15 16:14:54.844737] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:23:19.269 [2024-07-15 16:14:54.844744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:23:19.269 [2024-07-15 16:14:54.844755] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:23:19.269 [2024-07-15 16:14:54.844762] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:23:19.269 [2024-07-15 16:14:54.844768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:23:19.269 [2024-07-15 16:14:54.844827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.269 [2024-07-15 16:14:54.844839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.269 [2024-07-15 16:14:54.844854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.269 [2024-07-15 16:14:54.844862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.269 [2024-07-15 16:14:54.844872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.269 [2024-07-15 16:14:54.844880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.270 [2024-07-15 16:14:54.844889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.270 [2024-07-15 16:14:54.844897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.270 [2024-07-15 16:14:54.844907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.270 [2024-07-15 16:14:54.844915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.270 [2024-07-15 16:14:54.844924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.270 [2024-07-15 16:14:54.844932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.270 [2024-07-15 16:14:54.844942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
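Aside on the two failure signatures in this run: "ABORTED - SQ DELETION (00/08)" is NVMe status code type 0x0 (generic) with status code 0x08, reported for I/O that was still outstanding when its submission queue was torn down during the controller shutdown, and "connect() failed, errno = 111" is ECONNREFUSED from the host's reconnect attempts after the target listener went away. The sketch below shows how an SPDK host completion callback could recognize the first case; it is illustrative only and not part of the test code captured here (the io_complete/ctx names are hypothetical), assuming just the public definitions from spdk/nvme.h.

#include <stdio.h>

#include "spdk/nvme.h"

/* Illustrative completion callback (hypothetical name); it matches the
 * spdk_nvme_cmd_cb signature used by spdk_nvme_ns_cmd_read()/_write(). */
static void
io_complete(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	(void)ctx;

	if (!spdk_nvme_cpl_is_error(cpl)) {
		return; /* normal completion */
	}

	if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		/* Command aborted because its submission queue was deleted,
		 * i.e. the "(00/08)" completions printed in the log above. */
		fprintf(stderr, "I/O aborted by SQ deletion\n");
		return;
	}

	fprintf(stderr, "I/O failed: sct=0x%x sc=0x%x\n",
		cpl->status.sct, cpl->status.sc);
}
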
00:23:19.270 [2024-07-15 16:14:54.844949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.270 [2024-07-15 16:14:54.844958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.270 [2024-07-15 16:14:54.844966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.270 [2024-07-15 16:14:54.844975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.270 [2024-07-15 16:14:54.844986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.270 [2024-07-15 16:14:54.844996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.270 [2024-07-15 16:14:54.845004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.270 [2024-07-15 16:14:54.845014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.270 [2024-07-15 16:14:54.845021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.270 [2024-07-15 16:14:54.845031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.270 [2024-07-15 16:14:54.845041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.270 [2024-07-15 16:14:54.845050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.270 [2024-07-15 16:14:54.845058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.270 [2024-07-15 16:14:54.845067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.270 [2024-07-15 16:14:54.845075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.270 [2024-07-15 16:14:54.845085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.270 [2024-07-15 16:14:54.845092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.270 [2024-07-15 16:14:54.845102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.270 [2024-07-15 16:14:54.845109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.270 [2024-07-15 16:14:54.845119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.270 [2024-07-15 
16:14:54.845133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.270 [2024-07-15 16:14:54.845143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.270 [2024-07-15 16:14:54.845150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.270 [2024-07-15 16:14:54.845161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.270 [2024-07-15 16:14:54.845169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.270 [2024-07-15 16:14:54.845178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.270 [2024-07-15 16:14:54.845186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.270 [2024-07-15 16:14:54.845195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.270 [2024-07-15 16:14:54.845203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.270 [2024-07-15 16:14:54.845215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.270 [2024-07-15 16:14:54.845222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.270 [2024-07-15 16:14:54.845233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.270 [2024-07-15 16:14:54.845240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.270 [2024-07-15 16:14:54.845249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.270 [2024-07-15 16:14:54.845257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.270 [2024-07-15 16:14:54.845266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.270 [2024-07-15 16:14:54.845274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.270 [2024-07-15 16:14:54.845283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.270 [2024-07-15 16:14:54.845290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.270 [2024-07-15 16:14:54.845300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.270 [2024-07-15 16:14:54.845308] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.270 [2024-07-15 16:14:54.845317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.270 [2024-07-15 16:14:54.845325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.270 [2024-07-15 16:14:54.845334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.270 [2024-07-15 16:14:54.845342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.270 [2024-07-15 16:14:54.845351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.270 [2024-07-15 16:14:54.845358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.270 [2024-07-15 16:14:54.845368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.270 [2024-07-15 16:14:54.845376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.270 [2024-07-15 16:14:54.845385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.270 [2024-07-15 16:14:54.845393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.270 [2024-07-15 16:14:54.845403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.270 [2024-07-15 16:14:54.845410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.270 [2024-07-15 16:14:54.845420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.270 [2024-07-15 16:14:54.845431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.270 [2024-07-15 16:14:54.845441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.270 [2024-07-15 16:14:54.845448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.270 [2024-07-15 16:14:54.845458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.271 [2024-07-15 16:14:54.845467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.271 [2024-07-15 16:14:54.845476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.271 [2024-07-15 16:14:54.845485] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.271 [2024-07-15 16:14:54.845494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.271 [2024-07-15 16:14:54.845502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.271 [2024-07-15 16:14:54.845512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.271 [2024-07-15 16:14:54.845520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.271 [2024-07-15 16:14:54.845530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.271 [2024-07-15 16:14:54.845537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.271 [2024-07-15 16:14:54.845547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.271 [2024-07-15 16:14:54.845554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.271 [2024-07-15 16:14:54.845564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.271 [2024-07-15 16:14:54.845572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.271 [2024-07-15 16:14:54.845582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.271 [2024-07-15 16:14:54.845590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.271 [2024-07-15 16:14:54.845600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.271 [2024-07-15 16:14:54.845608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.271 [2024-07-15 16:14:54.845618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.271 [2024-07-15 16:14:54.845625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.271 [2024-07-15 16:14:54.845635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.271 [2024-07-15 16:14:54.845643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.271 [2024-07-15 16:14:54.845654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.271 [2024-07-15 16:14:54.845663] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.271 [2024-07-15 16:14:54.845673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.271 [2024-07-15 16:14:54.845680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.271 [2024-07-15 16:14:54.845691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.271 [2024-07-15 16:14:54.845698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.271 [2024-07-15 16:14:54.845707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.271 [2024-07-15 16:14:54.845714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.271 [2024-07-15 16:14:54.845725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.271 [2024-07-15 16:14:54.845733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.271 [2024-07-15 16:14:54.845742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.271 [2024-07-15 16:14:54.845750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.271 [2024-07-15 16:14:54.845760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.271 [2024-07-15 16:14:54.845768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.271 [2024-07-15 16:14:54.845778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.271 [2024-07-15 16:14:54.845785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.271 [2024-07-15 16:14:54.845795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.271 [2024-07-15 16:14:54.845802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.271 [2024-07-15 16:14:54.845813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.271 [2024-07-15 16:14:54.845820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.271 [2024-07-15 16:14:54.845830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.271 [2024-07-15 16:14:54.845838] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.271 [2024-07-15 16:14:54.845847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.271 [2024-07-15 16:14:54.845854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.271 [2024-07-15 16:14:54.845864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.271 [2024-07-15 16:14:54.845875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.271 [2024-07-15 16:14:54.845885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.271 [2024-07-15 16:14:54.845892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.272 [2024-07-15 16:14:54.845902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.272 [2024-07-15 16:14:54.845910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.272 [2024-07-15 16:14:54.845919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.272 [2024-07-15 16:14:54.845927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.272 [2024-07-15 16:14:54.845936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.272 [2024-07-15 16:14:54.845944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.272 [2024-07-15 16:14:54.845954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.272 [2024-07-15 16:14:54.845961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.272 [2024-07-15 16:14:54.845970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152ae70 is same with the state(5) to be set 00:23:19.272 [2024-07-15 16:14:54.847268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.272 [2024-07-15 16:14:54.847284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.272 [2024-07-15 16:14:54.847297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.272 [2024-07-15 16:14:54.847306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.272 [2024-07-15 16:14:54.847318] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.272 [2024-07-15 16:14:54.847327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.272 [2024-07-15 16:14:54.847338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.272 [2024-07-15 16:14:54.847347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.272 [2024-07-15 16:14:54.847356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.272 [2024-07-15 16:14:54.847363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.272 [2024-07-15 16:14:54.847373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.272 [2024-07-15 16:14:54.847381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.272 [2024-07-15 16:14:54.847390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.272 [2024-07-15 16:14:54.847401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.272 [2024-07-15 16:14:54.847411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.272 [2024-07-15 16:14:54.847419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.272 [2024-07-15 16:14:54.847428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.272 [2024-07-15 16:14:54.847436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.272 [2024-07-15 16:14:54.847446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.272 [2024-07-15 16:14:54.847452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.272 [2024-07-15 16:14:54.847463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.272 [2024-07-15 16:14:54.847471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.272 [2024-07-15 16:14:54.847481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.272 [2024-07-15 16:14:54.847488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.272 [2024-07-15 16:14:54.847498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.272 [2024-07-15 16:14:54.847506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.272 [2024-07-15 16:14:54.847515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.272 [2024-07-15 16:14:54.847523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.272 [2024-07-15 16:14:54.847534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.272 [2024-07-15 16:14:54.847541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.272 [2024-07-15 16:14:54.847551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.272 [2024-07-15 16:14:54.847558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.272 [2024-07-15 16:14:54.847568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.272 [2024-07-15 16:14:54.847575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.272 [2024-07-15 16:14:54.847584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.272 [2024-07-15 16:14:54.847592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.272 [2024-07-15 16:14:54.847601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.272 [2024-07-15 16:14:54.847610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.272 [2024-07-15 16:14:54.847621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.272 [2024-07-15 16:14:54.847629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.272 [2024-07-15 16:14:54.847638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.272 [2024-07-15 16:14:54.847645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.272 [2024-07-15 16:14:54.847655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.272 [2024-07-15 16:14:54.847663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.272 [2024-07-15 16:14:54.847673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.272 [2024-07-15 16:14:54.847681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.272 [2024-07-15 16:14:54.847692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.272 [2024-07-15 16:14:54.847700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.272 [2024-07-15 16:14:54.847710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.272 [2024-07-15 16:14:54.847718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.272 [2024-07-15 16:14:54.847727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.272 [2024-07-15 16:14:54.847735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.272 [2024-07-15 16:14:54.847745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.272 [2024-07-15 16:14:54.847753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.272 [2024-07-15 16:14:54.847762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.272 [2024-07-15 16:14:54.847770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.272 [2024-07-15 16:14:54.847780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.272 [2024-07-15 16:14:54.847787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.272 [2024-07-15 16:14:54.847797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.272 [2024-07-15 16:14:54.847805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.272 [2024-07-15 16:14:54.847814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.272 [2024-07-15 16:14:54.847822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.272 [2024-07-15 16:14:54.847833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.272 [2024-07-15 16:14:54.847842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.272 [2024-07-15 16:14:54.847852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:19.272 [2024-07-15 16:14:54.847860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.272 [2024-07-15 16:14:54.847869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.272 [2024-07-15 16:14:54.847877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.272 [2024-07-15 16:14:54.847887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.272 [2024-07-15 16:14:54.847894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.272 [2024-07-15 16:14:54.847903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.272 [2024-07-15 16:14:54.847910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.272 [2024-07-15 16:14:54.847920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.272 [2024-07-15 16:14:54.847928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.272 [2024-07-15 16:14:54.847937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.272 [2024-07-15 16:14:54.847944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.273 [2024-07-15 16:14:54.847953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.273 [2024-07-15 16:14:54.847960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.273 [2024-07-15 16:14:54.847970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.273 [2024-07-15 16:14:54.847978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.273 [2024-07-15 16:14:54.847988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.273 [2024-07-15 16:14:54.847995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.273 [2024-07-15 16:14:54.848004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.273 [2024-07-15 16:14:54.848012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.273 [2024-07-15 16:14:54.848021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:19.273 [2024-07-15 16:14:54.848029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.273 [2024-07-15 16:14:54.848038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.273 [2024-07-15 16:14:54.848046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.273 [2024-07-15 16:14:54.848058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.273 [2024-07-15 16:14:54.848065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.273 [2024-07-15 16:14:54.848074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.273 [2024-07-15 16:14:54.848081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.273 [2024-07-15 16:14:54.848091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.273 [2024-07-15 16:14:54.848099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.273 [2024-07-15 16:14:54.848108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.273 [2024-07-15 16:14:54.848116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.273 [2024-07-15 16:14:54.848130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.273 [2024-07-15 16:14:54.848138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.273 [2024-07-15 16:14:54.848147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.273 [2024-07-15 16:14:54.848155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.273 [2024-07-15 16:14:54.848164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.273 [2024-07-15 16:14:54.848171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.273 [2024-07-15 16:14:54.848181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.273 [2024-07-15 16:14:54.848188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.273 [2024-07-15 16:14:54.848198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.273 [2024-07-15 
16:14:54.848205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.273 [2024-07-15 16:14:54.848214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.273 [2024-07-15 16:14:54.848222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.273 [2024-07-15 16:14:54.848231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.273 [2024-07-15 16:14:54.848239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.273 [2024-07-15 16:14:54.848249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.273 [2024-07-15 16:14:54.848256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.273 [2024-07-15 16:14:54.848266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.273 [2024-07-15 16:14:54.848275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.273 [2024-07-15 16:14:54.848285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.273 [2024-07-15 16:14:54.848293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.273 [2024-07-15 16:14:54.848302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.273 [2024-07-15 16:14:54.848310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.273 [2024-07-15 16:14:54.848319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.273 [2024-07-15 16:14:54.848327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.273 [2024-07-15 16:14:54.848336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.273 [2024-07-15 16:14:54.848344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.273 [2024-07-15 16:14:54.848353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.273 [2024-07-15 16:14:54.848360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.273 [2024-07-15 16:14:54.848369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.273 [2024-07-15 16:14:54.848377] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.273 [2024-07-15 16:14:54.848387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.273 [2024-07-15 16:14:54.848394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.273 [2024-07-15 16:14:54.848402] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152c300 is same with the state(5) to be set 00:23:19.273 [2024-07-15 16:14:54.849670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.273 [2024-07-15 16:14:54.849684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.273 [2024-07-15 16:14:54.849697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.273 [2024-07-15 16:14:54.849706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.273 [2024-07-15 16:14:54.849717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.273 [2024-07-15 16:14:54.849726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.273 [2024-07-15 16:14:54.849737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.273 [2024-07-15 16:14:54.849746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.273 [2024-07-15 16:14:54.849757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.273 [2024-07-15 16:14:54.849766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.273 [2024-07-15 16:14:54.849778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.273 [2024-07-15 16:14:54.849785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.273 [2024-07-15 16:14:54.849795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.273 [2024-07-15 16:14:54.849803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.273 [2024-07-15 16:14:54.849812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.273 [2024-07-15 16:14:54.849820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.273 [2024-07-15 16:14:54.849829] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.273 [2024-07-15 16:14:54.849837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.273 [2024-07-15 16:14:54.849846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.273 [2024-07-15 16:14:54.849854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.273 [2024-07-15 16:14:54.849863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.273 [2024-07-15 16:14:54.849871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.273 [2024-07-15 16:14:54.849880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.273 [2024-07-15 16:14:54.849887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.273 [2024-07-15 16:14:54.849897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.273 [2024-07-15 16:14:54.849904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.273 [2024-07-15 16:14:54.849914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.273 [2024-07-15 16:14:54.849922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.273 [2024-07-15 16:14:54.849931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.273 [2024-07-15 16:14:54.849939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.273 [2024-07-15 16:14:54.849948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.273 [2024-07-15 16:14:54.849956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.274 [2024-07-15 16:14:54.849965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.274 [2024-07-15 16:14:54.849972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.274 [2024-07-15 16:14:54.849982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.274 [2024-07-15 16:14:54.849991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.274 [2024-07-15 16:14:54.850001] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.274 [2024-07-15 16:14:54.850008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.274 [2024-07-15 16:14:54.850018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.274 [2024-07-15 16:14:54.850028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.274 [2024-07-15 16:14:54.850039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.274 [2024-07-15 16:14:54.850050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.274 [2024-07-15 16:14:54.850061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.274 [2024-07-15 16:14:54.850069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.274 [2024-07-15 16:14:54.850081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.274 [2024-07-15 16:14:54.850090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.274 [2024-07-15 16:14:54.850103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.274 [2024-07-15 16:14:54.850112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.274 [2024-07-15 16:14:54.850129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.274 [2024-07-15 16:14:54.850137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.274 [2024-07-15 16:14:54.850149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.274 [2024-07-15 16:14:54.850158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.274 [2024-07-15 16:14:54.850168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.274 [2024-07-15 16:14:54.850176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.274 [2024-07-15 16:14:54.850185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.274 [2024-07-15 16:14:54.850194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.274 [2024-07-15 16:14:54.850203] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.274 [2024-07-15 16:14:54.850211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.274 [2024-07-15 16:14:54.850220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.274 [2024-07-15 16:14:54.850228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.274 [2024-07-15 16:14:54.850240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.274 [2024-07-15 16:14:54.850248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.274 [2024-07-15 16:14:54.850258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.274 [2024-07-15 16:14:54.850265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.274 [2024-07-15 16:14:54.850275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.274 [2024-07-15 16:14:54.850283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.274 [2024-07-15 16:14:54.850295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.274 [2024-07-15 16:14:54.850308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.274 [2024-07-15 16:14:54.850321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.274 [2024-07-15 16:14:54.850329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.274 [2024-07-15 16:14:54.850340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.274 [2024-07-15 16:14:54.850349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.274 [2024-07-15 16:14:54.850361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.274 [2024-07-15 16:14:54.850371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.274 [2024-07-15 16:14:54.850384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.274 [2024-07-15 16:14:54.850393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.274 [2024-07-15 16:14:54.850405] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.274 [2024-07-15 16:14:54.850414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.274 [2024-07-15 16:14:54.850424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.274 [2024-07-15 16:14:54.850431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.274 [2024-07-15 16:14:54.850441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.274 [2024-07-15 16:14:54.850449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.274 [2024-07-15 16:14:54.850460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.274 [2024-07-15 16:14:54.850467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.274 [2024-07-15 16:14:54.850477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.274 [2024-07-15 16:14:54.850490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.274 [2024-07-15 16:14:54.850500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.274 [2024-07-15 16:14:54.850507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.274 [2024-07-15 16:14:54.850516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.274 [2024-07-15 16:14:54.850524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.274 [2024-07-15 16:14:54.850534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.274 [2024-07-15 16:14:54.850543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.274 [2024-07-15 16:14:54.850553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.274 [2024-07-15 16:14:54.850562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.274 [2024-07-15 16:14:54.850573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.274 [2024-07-15 16:14:54.850581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.274 [2024-07-15 16:14:54.850590] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.274 [2024-07-15 16:14:54.850598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.274 [2024-07-15 16:14:54.850609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.274 [2024-07-15 16:14:54.850616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.274 [2024-07-15 16:14:54.850626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.274 [2024-07-15 16:14:54.850633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.274 [2024-07-15 16:14:54.850643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.274 [2024-07-15 16:14:54.850652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.274 [2024-07-15 16:14:54.850664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.274 [2024-07-15 16:14:54.850673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.274 [2024-07-15 16:14:54.850684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.274 [2024-07-15 16:14:54.850692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.274 [2024-07-15 16:14:54.850701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.274 [2024-07-15 16:14:54.850709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.274 [2024-07-15 16:14:54.850720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.274 [2024-07-15 16:14:54.850727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.274 [2024-07-15 16:14:54.850737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.274 [2024-07-15 16:14:54.850744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.274 [2024-07-15 16:14:54.850754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.274 [2024-07-15 16:14:54.850761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.274 [2024-07-15 16:14:54.850772] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.275 [2024-07-15 16:14:54.850781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.275 [2024-07-15 16:14:54.850792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.275 [2024-07-15 16:14:54.850799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.275 [2024-07-15 16:14:54.850808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.275 [2024-07-15 16:14:54.850815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.275 [2024-07-15 16:14:54.850824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.275 [2024-07-15 16:14:54.850832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.275 [2024-07-15 16:14:54.850841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.275 [2024-07-15 16:14:54.850850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.275 [2024-07-15 16:14:54.850861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.275 [2024-07-15 16:14:54.850870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.275 [2024-07-15 16:14:54.850879] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152d770 is same with the state(5) to be set 00:23:19.275 [2024-07-15 16:14:54.852146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.275 [2024-07-15 16:14:54.852158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.275 [2024-07-15 16:14:54.852169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.275 [2024-07-15 16:14:54.852177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.275 [2024-07-15 16:14:54.852187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.275 [2024-07-15 16:14:54.852194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.275 [2024-07-15 16:14:54.852207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.275 [2024-07-15 16:14:54.852215] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.275 [2024-07-15 16:14:54.852225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.275 [2024-07-15 16:14:54.852233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.275 [2024-07-15 16:14:54.852242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.275 [2024-07-15 16:14:54.852249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.275 [2024-07-15 16:14:54.852259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.275 [2024-07-15 16:14:54.852266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.275 [2024-07-15 16:14:54.852276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.275 [2024-07-15 16:14:54.852284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.275 [2024-07-15 16:14:54.852293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.275 [2024-07-15 16:14:54.852301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.275 [2024-07-15 16:14:54.852310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.275 [2024-07-15 16:14:54.852318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.275 [2024-07-15 16:14:54.852328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.275 [2024-07-15 16:14:54.852335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.275 [2024-07-15 16:14:54.852345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.275 [2024-07-15 16:14:54.852352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.275 [2024-07-15 16:14:54.852361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.275 [2024-07-15 16:14:54.852369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.275 [2024-07-15 16:14:54.852378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.275 [2024-07-15 16:14:54.852386] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.275 [2024-07-15 16:14:54.852395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.275 [2024-07-15 16:14:54.852402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.275 [2024-07-15 16:14:54.852411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.275 [2024-07-15 16:14:54.852421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.275 [2024-07-15 16:14:54.852430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.275 [2024-07-15 16:14:54.852437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.275 [2024-07-15 16:14:54.852447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.275 [2024-07-15 16:14:54.852454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.275 [2024-07-15 16:14:54.852464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.275 [2024-07-15 16:14:54.852471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.275 [2024-07-15 16:14:54.852481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.275 [2024-07-15 16:14:54.852488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.275 [2024-07-15 16:14:54.852497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.275 [2024-07-15 16:14:54.852505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.275 [2024-07-15 16:14:54.852514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.275 [2024-07-15 16:14:54.852522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.275 [2024-07-15 16:14:54.852532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.275 [2024-07-15 16:14:54.852541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.275 [2024-07-15 16:14:54.852552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.275 [2024-07-15 16:14:54.852560] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.275 [2024-07-15 16:14:54.852569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.275 [2024-07-15 16:14:54.852577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.275 [2024-07-15 16:14:54.852587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.275 [2024-07-15 16:14:54.852593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.275 [2024-07-15 16:14:54.852603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.275 [2024-07-15 16:14:54.852610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.275 [2024-07-15 16:14:54.852619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.275 [2024-07-15 16:14:54.852627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.275 [2024-07-15 16:14:54.852638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.275 [2024-07-15 16:14:54.852646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.276 [2024-07-15 16:14:54.852655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.276 [2024-07-15 16:14:54.852663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.276 [2024-07-15 16:14:54.852672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.276 [2024-07-15 16:14:54.852680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.276 [2024-07-15 16:14:54.852690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.276 [2024-07-15 16:14:54.852697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.276 [2024-07-15 16:14:54.852707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.276 [2024-07-15 16:14:54.852714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.276 [2024-07-15 16:14:54.852724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.276 [2024-07-15 16:14:54.852731] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.276 [2024-07-15 16:14:54.852741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.276 [2024-07-15 16:14:54.852748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.276 [2024-07-15 16:14:54.852757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.276 [2024-07-15 16:14:54.852765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.276 [2024-07-15 16:14:54.852774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.276 [2024-07-15 16:14:54.852781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.276 [2024-07-15 16:14:54.852790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.276 [2024-07-15 16:14:54.852797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.276 [2024-07-15 16:14:54.852807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.276 [2024-07-15 16:14:54.852814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.276 [2024-07-15 16:14:54.852824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.276 [2024-07-15 16:14:54.852831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.276 [2024-07-15 16:14:54.852840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.276 [2024-07-15 16:14:54.852849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.276 [2024-07-15 16:14:54.852859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.276 [2024-07-15 16:14:54.852867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.276 [2024-07-15 16:14:54.852876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.276 [2024-07-15 16:14:54.852883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.276 [2024-07-15 16:14:54.852893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.276 [2024-07-15 16:14:54.852900] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.276 [2024-07-15 16:14:54.852910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.276 [2024-07-15 16:14:54.852917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.276 [2024-07-15 16:14:54.852926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.276 [2024-07-15 16:14:54.852933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.276 [2024-07-15 16:14:54.852942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.276 [2024-07-15 16:14:54.852950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.276 [2024-07-15 16:14:54.852960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.276 [2024-07-15 16:14:54.852968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.276 [2024-07-15 16:14:54.852978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.276 [2024-07-15 16:14:54.852985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.276 [2024-07-15 16:14:54.852996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.276 [2024-07-15 16:14:54.853003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.276 [2024-07-15 16:14:54.853014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.276 [2024-07-15 16:14:54.853022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.276 [2024-07-15 16:14:54.853031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.276 [2024-07-15 16:14:54.853040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.276 [2024-07-15 16:14:54.853049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.276 [2024-07-15 16:14:54.853058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.276 [2024-07-15 16:14:54.853069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.276 [2024-07-15 16:14:54.853077] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.276 [2024-07-15 16:14:54.853088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.276 [2024-07-15 16:14:54.853096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.276 [2024-07-15 16:14:54.853105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.276 [2024-07-15 16:14:54.853113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.276 [2024-07-15 16:14:54.853134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.276 [2024-07-15 16:14:54.853142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.276 [2024-07-15 16:14:54.853153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.276 [2024-07-15 16:14:54.853162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.276 [2024-07-15 16:14:54.853172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.276 [2024-07-15 16:14:54.853180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.276 [2024-07-15 16:14:54.853189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.276 [2024-07-15 16:14:54.853196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.276 [2024-07-15 16:14:54.853207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.276 [2024-07-15 16:14:54.853214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.276 [2024-07-15 16:14:54.853225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.276 [2024-07-15 16:14:54.853232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.276 [2024-07-15 16:14:54.853242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.276 [2024-07-15 16:14:54.853250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.276 [2024-07-15 16:14:54.853261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.276 [2024-07-15 16:14:54.853268] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.276 [2024-07-15 16:14:54.853277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152eaf0 is same with the state(5) to be set 00:23:19.276 [2024-07-15 16:14:54.854771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:19.276 [2024-07-15 16:14:54.854789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:19.276 [2024-07-15 16:14:54.854796] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:19.276 [2024-07-15 16:14:54.854808] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:23:19.276 [2024-07-15 16:14:54.854819] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:23:19.276 [2024-07-15 16:14:54.854846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14865d0 (9): Bad file descriptor 00:23:19.276 [2024-07-15 16:14:54.854857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165a990 (9): Bad file descriptor 00:23:19.276 [2024-07-15 16:14:54.854866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1652210 (9): Bad file descriptor 00:23:19.276 [2024-07-15 16:14:54.854913] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:19.276 [2024-07-15 16:14:54.854932] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:19.276 [2024-07-15 16:14:54.854944] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:19.276 [2024-07-15 16:14:54.854954] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:19.276 [2024-07-15 16:14:54.854966] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:23:19.276 [2024-07-15 16:14:54.855019] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:23:19.276 task offset: 24576 on job bdev=Nvme2n1 fails
00:23:19.276
00:23:19.276 Latency(us)
00:23:19.277 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:19.277 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:19.277 Job: Nvme1n1 ended in about 0.94 seconds with error
00:23:19.277 Verification LBA range: start 0x0 length 0x400
00:23:19.277 Nvme1n1 : 0.94 203.46 12.72 67.82 0.00 233222.61 20425.39 235929.60
00:23:19.277 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:19.277 Job: Nvme2n1 ended in about 0.94 seconds with error
00:23:19.277 Verification LBA range: start 0x0 length 0x400
00:23:19.277 Nvme2n1 : 0.94 204.16 12.76 68.05 0.00 227580.59 38229.33 253405.87
00:23:19.277 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:19.277 Job: Nvme3n1 ended in about 0.94 seconds with error
00:23:19.277 Verification LBA range: start 0x0 length 0x400
00:23:19.277 Nvme3n1 : 0.94 203.90 12.74 67.97 0.00 223041.92 22173.01 249910.61
00:23:19.277 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:19.277 Job: Nvme4n1 ended in about 0.95 seconds with error
00:23:19.277 Verification LBA range: start 0x0 length 0x400
00:23:19.277 Nvme4n1 : 0.95 201.15 12.57 67.05 0.00 221406.08 15073.28 255153.49
00:23:19.277 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:19.277 Job: Nvme5n1 ended in about 0.96 seconds with error
00:23:19.277 Verification LBA range: start 0x0 length 0x400
00:23:19.277 Nvme5n1 : 0.96 133.76 8.36 66.88 0.00 289605.69 21408.43 283115.52
00:23:19.277 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:19.277 Job: Nvme6n1 ended in about 0.97 seconds with error
00:23:19.277 Verification LBA range: start 0x0 length 0x400
00:23:19.277 Nvme6n1 : 0.97 137.68 8.61 66.25 0.00 278962.97 20425.39 260396.37
00:23:19.277 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:19.277 Job: Nvme7n1 ended in about 0.97 seconds with error
00:23:19.277 Verification LBA range: start 0x0 length 0x400
00:23:19.277 Nvme7n1 : 0.97 132.17 8.26 66.09 0.00 280654.51 20425.39 253405.87
00:23:19.277 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:19.277 Job: Nvme8n1 ended in about 0.97 seconds with error
00:23:19.277 Verification LBA range: start 0x0 length 0x400
00:23:19.277 Nvme8n1 : 0.97 131.84 8.24 65.92 0.00 274939.16 21408.43 244667.73
00:23:19.277 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:19.277 Job: Nvme9n1 ended in about 0.97 seconds with error
00:23:19.277 Verification LBA range: start 0x0 length 0x400
00:23:19.277 Nvme9n1 : 0.97 131.51 8.22 65.76 0.00 269453.37 24139.09 274377.39
00:23:19.277 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:19.277 Job: Nvme10n1 ended in about 0.96 seconds with error
00:23:19.277 Verification LBA range: start 0x0 length 0x400
00:23:19.277 Nvme10n1 : 0.96 200.13 12.51 66.71 0.00 193759.36 14199.47 251658.24
00:23:19.277 ===================================================================================================================
00:23:19.277 Total : 1679.78 104.99 668.50 0.00 245129.50 14199.47 283115.52
00:23:19.277 [2024-07-15 16:14:54.878487] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on
non-zero 00:23:19.277 [2024-07-15 16:14:54.878517] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:23:19.277 [2024-07-15 16:14:54.879058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.277 [2024-07-15 16:14:54.879075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1623e90 with addr=10.0.0.2, port=4420 00:23:19.277 [2024-07-15 16:14:54.879084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1623e90 is same with the state(5) to be set 00:23:19.277 [2024-07-15 16:14:54.879358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.277 [2024-07-15 16:14:54.879369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd8340 with addr=10.0.0.2, port=4420 00:23:19.277 [2024-07-15 16:14:54.879377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd8340 is same with the state(5) to be set 00:23:19.277 [2024-07-15 16:14:54.879386] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:19.277 [2024-07-15 16:14:54.879393] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:19.277 [2024-07-15 16:14:54.879401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:19.277 [2024-07-15 16:14:54.879414] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:23:19.277 [2024-07-15 16:14:54.879420] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:23:19.277 [2024-07-15 16:14:54.879427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:23:19.277 [2024-07-15 16:14:54.879438] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:19.277 [2024-07-15 16:14:54.879444] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:23:19.277 [2024-07-15 16:14:54.879451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:23:19.277 [2024-07-15 16:14:54.880526] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:23:19.277 [2024-07-15 16:14:54.880541] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:23:19.277 [2024-07-15 16:14:54.880550] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:23:19.277 [2024-07-15 16:14:54.880559] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:19.277 [2024-07-15 16:14:54.880565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:19.277 [2024-07-15 16:14:54.880571] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:19.277 [2024-07-15 16:14:54.881042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.277 [2024-07-15 16:14:54.881055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1648290 with addr=10.0.0.2, port=4420 00:23:19.277 [2024-07-15 16:14:54.881067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1648290 is same with the state(5) to be set 00:23:19.277 [2024-07-15 16:14:54.881466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.277 [2024-07-15 16:14:54.881477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a91b0 with addr=10.0.0.2, port=4420 00:23:19.277 [2024-07-15 16:14:54.881485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a91b0 is same with the state(5) to be set 00:23:19.277 [2024-07-15 16:14:54.881496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1623e90 (9): Bad file descriptor 00:23:19.277 [2024-07-15 16:14:54.881506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd8340 (9): Bad file descriptor 00:23:19.277 [2024-07-15 16:14:54.881552] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:19.277 [2024-07-15 16:14:54.881564] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:19.277 [2024-07-15 16:14:54.882288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.277 [2024-07-15 16:14:54.882305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16470c0 with addr=10.0.0.2, port=4420 00:23:19.277 [2024-07-15 16:14:54.882312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16470c0 is same with the state(5) to be set 00:23:19.277 [2024-07-15 16:14:54.882617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.277 [2024-07-15 16:14:54.882627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14c3030 with addr=10.0.0.2, port=4420 00:23:19.277 [2024-07-15 16:14:54.882634] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c3030 is same with the state(5) to be set 00:23:19.277 [2024-07-15 16:14:54.882864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.277 [2024-07-15 16:14:54.882875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14c2ca0 with addr=10.0.0.2, port=4420 00:23:19.277 [2024-07-15 16:14:54.882882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c2ca0 is same with the state(5) to be set 00:23:19.277 [2024-07-15 16:14:54.882892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1648290 (9): Bad file descriptor 00:23:19.277 [2024-07-15 16:14:54.882902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a91b0 (9): Bad file descriptor 00:23:19.277 [2024-07-15 16:14:54.882911] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:23:19.277 [2024-07-15 16:14:54.882917] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:23:19.277 [2024-07-15 16:14:54.882925] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:23:19.277 [2024-07-15 16:14:54.882935] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:23:19.277 [2024-07-15 16:14:54.882943] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:23:19.277 [2024-07-15 16:14:54.882950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:23:19.277 [2024-07-15 16:14:54.883008] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:23:19.277 [2024-07-15 16:14:54.883019] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:23:19.277 [2024-07-15 16:14:54.883027] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:19.277 [2024-07-15 16:14:54.883036] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:19.277 [2024-07-15 16:14:54.883043] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:19.277 [2024-07-15 16:14:54.883071] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16470c0 (9): Bad file descriptor 00:23:19.277 [2024-07-15 16:14:54.883081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c3030 (9): Bad file descriptor 00:23:19.277 [2024-07-15 16:14:54.883090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c2ca0 (9): Bad file descriptor 00:23:19.277 [2024-07-15 16:14:54.883098] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:23:19.277 [2024-07-15 16:14:54.883105] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:23:19.277 [2024-07-15 16:14:54.883112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:23:19.277 [2024-07-15 16:14:54.883121] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:23:19.277 [2024-07-15 16:14:54.883133] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:23:19.277 [2024-07-15 16:14:54.883140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:23:19.277 [2024-07-15 16:14:54.883168] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:19.277 [2024-07-15 16:14:54.883176] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:19.277 [2024-07-15 16:14:54.883602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.277 [2024-07-15 16:14:54.883614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1652210 with addr=10.0.0.2, port=4420 00:23:19.277 [2024-07-15 16:14:54.883621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652210 is same with the state(5) to be set 00:23:19.277 [2024-07-15 16:14:54.884045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.277 [2024-07-15 16:14:54.884055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a990 with addr=10.0.0.2, port=4420 00:23:19.278 [2024-07-15 16:14:54.884062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165a990 is same with the state(5) to be set 00:23:19.278 [2024-07-15 16:14:54.884198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.278 [2024-07-15 16:14:54.884209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14865d0 with addr=10.0.0.2, port=4420 00:23:19.278 [2024-07-15 16:14:54.884216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14865d0 is same with the state(5) to be set 00:23:19.278 [2024-07-15 16:14:54.884223] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:23:19.278 [2024-07-15 16:14:54.884229] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:23:19.278 [2024-07-15 16:14:54.884235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:23:19.278 [2024-07-15 16:14:54.884244] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:23:19.278 [2024-07-15 16:14:54.884252] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:23:19.278 [2024-07-15 16:14:54.884259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:23:19.278 [2024-07-15 16:14:54.884268] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:23:19.278 [2024-07-15 16:14:54.884274] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:23:19.278 [2024-07-15 16:14:54.884280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:23:19.278 [2024-07-15 16:14:54.884309] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:19.278 [2024-07-15 16:14:54.884320] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:19.278 [2024-07-15 16:14:54.884326] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:19.278 [2024-07-15 16:14:54.884334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1652210 (9): Bad file descriptor 00:23:19.278 [2024-07-15 16:14:54.884343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165a990 (9): Bad file descriptor 00:23:19.278 [2024-07-15 16:14:54.884353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14865d0 (9): Bad file descriptor 00:23:19.278 [2024-07-15 16:14:54.884378] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:19.278 [2024-07-15 16:14:54.884385] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:23:19.278 [2024-07-15 16:14:54.884392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:23:19.278 [2024-07-15 16:14:54.884401] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:23:19.278 [2024-07-15 16:14:54.884408] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:23:19.278 [2024-07-15 16:14:54.884415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:23:19.278 [2024-07-15 16:14:54.884424] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:19.278 [2024-07-15 16:14:54.884430] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:19.278 [2024-07-15 16:14:54.884438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:19.278 [2024-07-15 16:14:54.884466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:19.278 [2024-07-15 16:14:54.884474] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:19.278 [2024-07-15 16:14:54.884480] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:19.278 16:14:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:23:19.278 16:14:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:23:20.220 16:14:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2364498 00:23:20.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2364498) - No such process 00:23:20.220 16:14:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:23:20.220 16:14:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:23:20.220 16:14:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:20.220 16:14:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:20.220 16:14:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:20.480 16:14:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:20.480 16:14:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:20.480 16:14:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:23:20.480 16:14:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:20.480 16:14:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:23:20.480 16:14:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:20.480 16:14:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:20.480 rmmod nvme_tcp 00:23:20.480 rmmod nvme_fabrics 00:23:20.480 rmmod nvme_keyring 00:23:20.480 16:14:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:20.480 16:14:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:23:20.480 16:14:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:23:20.480 16:14:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:20.480 16:14:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:20.480 16:14:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:20.480 16:14:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:20.480 16:14:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:20.480 16:14:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:20.480 16:14:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:20.480 16:14:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:20.480 16:14:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.395 16:14:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:22.395 00:23:22.395 real 0m7.647s 00:23:22.395 user 0m18.263s 00:23:22.395 sys 0m1.251s 00:23:22.395 
16:14:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:22.395 16:14:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:22.395 ************************************ 00:23:22.395 END TEST nvmf_shutdown_tc3 00:23:22.395 ************************************ 00:23:22.656 16:14:58 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:22.656 16:14:58 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:23:22.656 00:23:22.656 real 0m32.366s 00:23:22.656 user 1m15.920s 00:23:22.656 sys 0m9.193s 00:23:22.656 16:14:58 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:22.656 16:14:58 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:22.656 ************************************ 00:23:22.656 END TEST nvmf_shutdown 00:23:22.656 ************************************ 00:23:22.656 16:14:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:22.656 16:14:58 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:23:22.656 16:14:58 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:22.656 16:14:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:22.656 16:14:58 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:23:22.656 16:14:58 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:22.656 16:14:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:22.656 16:14:58 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:23:22.656 16:14:58 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:22.656 16:14:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:22.656 16:14:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:22.656 16:14:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:22.656 ************************************ 00:23:22.656 START TEST nvmf_multicontroller 00:23:22.656 ************************************ 00:23:22.656 16:14:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:22.656 * Looking for test storage... 
00:23:22.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:22.656 16:14:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:22.656 16:14:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:22.656 16:14:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:22.656 16:14:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:22.656 16:14:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:22.656 16:14:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:22.656 16:14:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:22.656 16:14:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:22.656 16:14:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:22.657 16:14:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:22.657 16:14:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:22.657 16:14:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:22.657 16:14:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:22.657 16:14:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:22.657 16:14:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:22.657 16:14:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:22.657 16:14:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:22.657 16:14:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:22.657 16:14:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:22.657 16:14:58 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:22.657 16:14:58 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:22.657 16:14:58 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:22.657 16:14:58 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.657 16:14:58 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.657 16:14:58 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.657 16:14:58 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:22.657 16:14:58 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.657 16:14:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:23:22.657 16:14:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:22.657 16:14:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:22.657 16:14:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:22.657 16:14:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:22.657 16:14:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:22.657 16:14:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:22.657 16:14:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:22.657 16:14:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:22.657 16:14:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:22.657 16:14:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:22.657 16:14:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:22.657 16:14:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:22.657 16:14:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:22.657 16:14:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:22.657 16:14:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:22.657 16:14:58 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:22.657 16:14:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:22.657 16:14:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:22.657 16:14:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:22.657 16:14:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:22.657 16:14:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.657 16:14:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:22.657 16:14:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.917 16:14:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:22.917 16:14:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:22.917 16:14:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:23:22.917 16:14:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:29.504 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:29.504 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:23:29.504 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:29.505 16:15:05 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:29.505 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:29.505 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:29.505 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:29.505 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:29.505 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:29.506 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:29.506 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:29.506 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:29.506 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:29.506 16:15:05 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:29.506 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:29.767 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:29.767 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:29.767 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:29.767 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:29.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:29.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.587 ms 00:23:29.767 00:23:29.767 --- 10.0.0.2 ping statistics --- 00:23:29.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.767 rtt min/avg/max/mdev = 0.587/0.587/0.587/0.000 ms 00:23:29.767 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:29.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:29.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:23:29.767 00:23:29.767 --- 10.0.0.1 ping statistics --- 00:23:29.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.767 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:23:29.767 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:29.767 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:23:29.767 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:29.767 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:29.767 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:29.767 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:29.767 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:29.767 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:29.767 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:29.767 16:15:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:29.767 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:29.767 16:15:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:29.767 16:15:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:29.767 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=2369643 00:23:29.767 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 2369643 00:23:29.767 16:15:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:29.767 16:15:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 2369643 ']' 00:23:29.767 16:15:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:29.767 16:15:05 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:23:29.767 16:15:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:29.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:29.767 16:15:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:29.767 16:15:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:29.767 [2024-07-15 16:15:05.594378] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:23:29.767 [2024-07-15 16:15:05.594445] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:30.029 EAL: No free 2048 kB hugepages reported on node 1 00:23:30.029 [2024-07-15 16:15:05.682327] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:30.029 [2024-07-15 16:15:05.777224] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:30.029 [2024-07-15 16:15:05.777282] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:30.029 [2024-07-15 16:15:05.777290] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:30.029 [2024-07-15 16:15:05.777297] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:30.029 [2024-07-15 16:15:05.777303] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
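For reference, the nvmf_tcp_init sequence traced above boils down to roughly the shell commands below. This is a minimal sketch reconstructed from this run's xtrace output; the cvl_0_0/cvl_0_1 interface names, the cvl_0_0_ns_spdk namespace name, and the 10.0.0.0/24 addressing come from this host's NIC port pair and will differ elsewhere.

# Clear any stale addressing, then move the target-side port into its own namespace
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# Address the initiator side (host) and the target side (inside the namespace)
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
# Bring both links up, plus loopback inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the default NVMe/TCP port and verify reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

With that in place, the nvmf_tgt target is launched inside the namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt ...), as the nvmfappstart trace that follows shows, while the initiator-side tools keep running on the host.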
00:23:30.029 [2024-07-15 16:15:05.777437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:30.029 [2024-07-15 16:15:05.777610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.029 [2024-07-15 16:15:05.777610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:30.598 16:15:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:30.598 16:15:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:23:30.598 16:15:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:30.598 16:15:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:30.598 16:15:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:30.598 16:15:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:30.598 16:15:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:30.598 16:15:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.598 16:15:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:30.598 [2024-07-15 16:15:06.430717] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:30.598 16:15:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.598 16:15:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:30.598 16:15:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.598 16:15:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:30.859 Malloc0 00:23:30.859 16:15:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.859 16:15:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:30.859 16:15:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.859 16:15:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:30.859 16:15:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.859 16:15:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:30.859 16:15:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.859 16:15:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:30.859 16:15:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.859 16:15:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:30.859 16:15:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.859 16:15:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:30.859 [2024-07-15 16:15:06.498491] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:30.859 16:15:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.859 
16:15:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:30.859 16:15:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.859 16:15:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:30.859 [2024-07-15 16:15:06.510430] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:30.859 16:15:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.859 16:15:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:30.859 16:15:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.859 16:15:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:30.859 Malloc1 00:23:30.859 16:15:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.859 16:15:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:30.859 16:15:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.859 16:15:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:30.859 16:15:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.859 16:15:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:30.859 16:15:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.859 16:15:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:30.859 16:15:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.859 16:15:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:30.859 16:15:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.859 16:15:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:30.859 16:15:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.859 16:15:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:30.859 16:15:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.859 16:15:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:30.859 16:15:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.859 16:15:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2369689 00:23:30.859 16:15:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:30.859 16:15:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:30.859 16:15:06 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 2369689 /var/tmp/bdevperf.sock 00:23:30.859 16:15:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 2369689 ']' 00:23:30.859 16:15:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:30.859 16:15:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:30.859 16:15:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:30.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:30.859 16:15:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:30.859 16:15:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:31.799 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:31.799 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:23:31.799 16:15:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:31.799 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.799 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:31.799 NVMe0n1 00:23:31.799 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.799 16:15:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:31.799 16:15:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:31.799 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.799 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:31.799 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.799 1 00:23:31.799 16:15:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:31.799 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:31.799 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:31.799 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:31.799 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:31.799 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:31.799 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:31.799 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:31.799 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.799 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:31.799 request: 00:23:31.799 { 00:23:31.799 "name": "NVMe0", 00:23:31.799 "trtype": "tcp", 00:23:31.799 "traddr": "10.0.0.2", 00:23:31.799 "adrfam": "ipv4", 00:23:31.799 "trsvcid": "4420", 00:23:31.799 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.799 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:31.799 "hostaddr": "10.0.0.2", 00:23:31.799 "hostsvcid": "60000", 00:23:31.799 "prchk_reftag": false, 00:23:31.799 "prchk_guard": false, 00:23:31.799 "hdgst": false, 00:23:31.799 "ddgst": false, 00:23:31.799 "method": "bdev_nvme_attach_controller", 00:23:31.799 "req_id": 1 00:23:31.799 } 00:23:31.799 Got JSON-RPC error response 00:23:31.799 response: 00:23:31.799 { 00:23:31.799 "code": -114, 00:23:31.799 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:31.799 } 00:23:31.799 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:31.799 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:31.799 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:31.799 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:31.799 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:31.799 16:15:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:31.799 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:31.799 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:31.799 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:31.799 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:31.799 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:31.799 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:31.799 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:31.799 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.799 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:31.799 request: 00:23:31.799 { 00:23:31.799 "name": "NVMe0", 00:23:31.799 "trtype": "tcp", 00:23:31.799 "traddr": "10.0.0.2", 00:23:31.799 "adrfam": "ipv4", 00:23:31.799 "trsvcid": "4420", 00:23:31.799 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:31.799 "hostaddr": "10.0.0.2", 00:23:31.799 "hostsvcid": "60000", 00:23:31.799 "prchk_reftag": false, 00:23:31.799 "prchk_guard": false, 00:23:31.799 
"hdgst": false, 00:23:31.799 "ddgst": false, 00:23:31.799 "method": "bdev_nvme_attach_controller", 00:23:31.799 "req_id": 1 00:23:31.799 } 00:23:31.799 Got JSON-RPC error response 00:23:31.799 response: 00:23:31.799 { 00:23:31.799 "code": -114, 00:23:31.799 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:31.799 } 00:23:31.799 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:31.799 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:31.799 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:31.799 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:31.799 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:31.800 16:15:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:31.800 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:31.800 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:31.800 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:31.800 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:31.800 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:31.800 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:31.800 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:31.800 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.800 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:31.800 request: 00:23:31.800 { 00:23:31.800 "name": "NVMe0", 00:23:31.800 "trtype": "tcp", 00:23:31.800 "traddr": "10.0.0.2", 00:23:31.800 "adrfam": "ipv4", 00:23:31.800 "trsvcid": "4420", 00:23:31.800 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.800 "hostaddr": "10.0.0.2", 00:23:31.800 "hostsvcid": "60000", 00:23:31.800 "prchk_reftag": false, 00:23:31.800 "prchk_guard": false, 00:23:31.800 "hdgst": false, 00:23:31.800 "ddgst": false, 00:23:31.800 "multipath": "disable", 00:23:31.800 "method": "bdev_nvme_attach_controller", 00:23:31.800 "req_id": 1 00:23:31.800 } 00:23:31.800 Got JSON-RPC error response 00:23:31.800 response: 00:23:31.800 { 00:23:31.800 "code": -114, 00:23:31.800 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:23:31.800 } 00:23:31.800 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:31.800 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:31.800 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:31.800 16:15:07 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:31.800 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:31.800 16:15:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:31.800 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:31.800 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:31.800 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:31.800 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:31.800 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:31.800 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:31.800 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:31.800 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.800 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:31.800 request: 00:23:31.800 { 00:23:31.800 "name": "NVMe0", 00:23:31.800 "trtype": "tcp", 00:23:31.800 "traddr": "10.0.0.2", 00:23:31.800 "adrfam": "ipv4", 00:23:31.800 "trsvcid": "4420", 00:23:31.800 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.800 "hostaddr": "10.0.0.2", 00:23:31.800 "hostsvcid": "60000", 00:23:31.800 "prchk_reftag": false, 00:23:31.800 "prchk_guard": false, 00:23:31.800 "hdgst": false, 00:23:31.800 "ddgst": false, 00:23:31.800 "multipath": "failover", 00:23:31.800 "method": "bdev_nvme_attach_controller", 00:23:31.800 "req_id": 1 00:23:31.800 } 00:23:31.800 Got JSON-RPC error response 00:23:31.800 response: 00:23:31.800 { 00:23:31.800 "code": -114, 00:23:31.800 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:31.800 } 00:23:31.800 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:31.800 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:31.800 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:31.800 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:31.800 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:31.800 16:15:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:31.800 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.800 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:32.062 00:23:32.062 16:15:07 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.062 16:15:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:32.062 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.062 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:32.062 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.062 16:15:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:32.062 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.062 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:32.322 00:23:32.322 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.322 16:15:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:32.322 16:15:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:32.322 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.322 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:32.322 16:15:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.322 16:15:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:32.322 16:15:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:33.260 0 00:23:33.520 16:15:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:33.520 16:15:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.520 16:15:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.520 16:15:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.520 16:15:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 2369689 00:23:33.520 16:15:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 2369689 ']' 00:23:33.520 16:15:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 2369689 00:23:33.520 16:15:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:23:33.520 16:15:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:33.520 16:15:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2369689 00:23:33.520 16:15:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:33.520 16:15:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:33.520 16:15:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2369689' 00:23:33.520 killing process with pid 2369689 00:23:33.520 16:15:09 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 2369689 00:23:33.520 16:15:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 2369689 00:23:33.520 16:15:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:33.520 16:15:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.520 16:15:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.520 16:15:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.520 16:15:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:33.520 16:15:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.520 16:15:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.520 16:15:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.520 16:15:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:23:33.520 16:15:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:33.520 16:15:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:33.520 16:15:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:33.520 16:15:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:23:33.520 16:15:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:23:33.520 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:33.520 [2024-07-15 16:15:06.628769] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:23:33.520 [2024-07-15 16:15:06.628821] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2369689 ] 00:23:33.520 EAL: No free 2048 kB hugepages reported on node 1 00:23:33.520 [2024-07-15 16:15:06.679531] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.520 [2024-07-15 16:15:06.734009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.520 [2024-07-15 16:15:07.970875] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name d882371a-cc7b-4a73-bebe-74e45641a4ed already exists 00:23:33.520 [2024-07-15 16:15:07.970904] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:d882371a-cc7b-4a73-bebe-74e45641a4ed alias for bdev NVMe1n1 00:23:33.520 [2024-07-15 16:15:07.970912] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:33.520 Running I/O for 1 seconds... 
00:23:33.520 00:23:33.520 Latency(us) 00:23:33.520 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.520 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:33.520 NVMe0n1 : 1.01 20269.77 79.18 0.00 0.00 6297.61 4259.84 15510.19 00:23:33.520 =================================================================================================================== 00:23:33.520 Total : 20269.77 79.18 0.00 0.00 6297.61 4259.84 15510.19 00:23:33.520 Received shutdown signal, test time was about 1.000000 seconds 00:23:33.520 00:23:33.520 Latency(us) 00:23:33.520 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.520 =================================================================================================================== 00:23:33.520 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:33.520 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:33.520 16:15:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:33.520 16:15:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:33.520 16:15:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:23:33.520 16:15:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:33.520 16:15:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:23:33.520 16:15:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:33.520 16:15:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:23:33.520 16:15:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:33.520 16:15:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:33.520 rmmod nvme_tcp 00:23:33.781 rmmod nvme_fabrics 00:23:33.781 rmmod nvme_keyring 00:23:33.781 16:15:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:33.781 16:15:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:23:33.781 16:15:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:23:33.781 16:15:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 2369643 ']' 00:23:33.781 16:15:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 2369643 00:23:33.782 16:15:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 2369643 ']' 00:23:33.782 16:15:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 2369643 00:23:33.782 16:15:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:23:33.782 16:15:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:33.782 16:15:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2369643 00:23:33.782 16:15:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:33.782 16:15:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:33.782 16:15:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2369643' 00:23:33.782 killing process with pid 2369643 00:23:33.782 16:15:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 2369643 00:23:33.782 16:15:09 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 2369643 00:23:33.782 16:15:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:33.782 16:15:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:33.782 16:15:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:33.782 16:15:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:33.782 16:15:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:33.782 16:15:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.782 16:15:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:33.782 16:15:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.330 16:15:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:36.330 00:23:36.330 real 0m13.295s 00:23:36.330 user 0m16.511s 00:23:36.330 sys 0m5.934s 00:23:36.330 16:15:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:36.330 16:15:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:36.330 ************************************ 00:23:36.330 END TEST nvmf_multicontroller 00:23:36.330 ************************************ 00:23:36.330 16:15:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:36.330 16:15:11 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:36.330 16:15:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:36.330 16:15:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:36.330 16:15:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:36.330 ************************************ 00:23:36.330 START TEST nvmf_aer 00:23:36.330 ************************************ 00:23:36.330 16:15:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:36.330 * Looking for test storage... 
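The duplicate-attach cases exercised in the multicontroller run above go through the rpc_cmd test helper (a wrapper around SPDK's scripts/rpc.py) and amount to calls along the lines of the sketch below, issued against bdevperf's RPC socket. The socket path, addresses and NQNs are the ones used in this run; treat the exact invocation as illustrative rather than canonical.

# Initial attach: succeeds and exposes bdev NVMe0n1
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
# Re-attaching under the same controller name with a different host NQN, a different
# subsystem NQN, or with -x disable / -x failover is expected to fail with the
# JSON-RPC error -114 responses captured in the log above
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000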
00:23:36.330 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:36.330 16:15:11 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:36.330 16:15:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:36.330 16:15:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:36.330 16:15:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:36.330 16:15:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:36.330 16:15:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:36.330 16:15:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:36.330 16:15:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:36.331 16:15:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:36.331 16:15:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:36.331 16:15:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:36.331 16:15:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:36.331 16:15:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:36.331 16:15:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:36.331 16:15:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:36.331 16:15:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:36.331 16:15:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:36.331 16:15:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:36.331 16:15:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:36.331 16:15:11 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:36.331 16:15:11 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:36.331 16:15:11 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:36.331 16:15:11 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.331 16:15:11 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.331 16:15:11 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.331 16:15:11 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:36.331 16:15:11 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.331 16:15:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:23:36.331 16:15:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:36.331 16:15:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:36.331 16:15:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:36.331 16:15:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:36.331 16:15:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:36.331 16:15:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:36.331 16:15:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:36.331 16:15:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:36.331 16:15:11 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:36.331 16:15:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:36.331 16:15:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:36.331 16:15:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:36.331 16:15:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:36.331 16:15:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:36.331 16:15:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.331 16:15:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:36.331 16:15:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.331 16:15:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:36.331 16:15:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:36.331 16:15:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:23:36.331 16:15:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:43.027 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 
0x159b)' 00:23:43.027 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:43.027 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:43.027 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:43.027 
16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:43.027 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:43.288 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:43.288 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:43.288 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:43.288 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:43.288 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:23:43.288 00:23:43.288 --- 10.0.0.2 ping statistics --- 00:23:43.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.288 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:23:43.288 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:43.288 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:43.288 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.364 ms 00:23:43.288 00:23:43.288 --- 10.0.0.1 ping statistics --- 00:23:43.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.288 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:23:43.288 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:43.288 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:23:43.288 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:43.288 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:43.288 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:43.288 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:43.288 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:43.288 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:43.288 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:43.288 16:15:18 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:43.288 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:43.288 16:15:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:43.288 16:15:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:43.288 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2374899 00:23:43.288 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2374899 00:23:43.288 16:15:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:43.288 16:15:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 2374899 ']' 00:23:43.288 16:15:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:43.288 16:15:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:43.288 16:15:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:43.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:43.288 16:15:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:43.288 16:15:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:43.288 [2024-07-15 16:15:19.051902] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:23:43.288 [2024-07-15 16:15:19.051967] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:43.288 EAL: No free 2048 kB hugepages reported on node 1 00:23:43.288 [2024-07-15 16:15:19.123241] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:43.548 [2024-07-15 16:15:19.200411] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:43.548 [2024-07-15 16:15:19.200447] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:43.548 [2024-07-15 16:15:19.200456] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:43.548 [2024-07-15 16:15:19.200463] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:43.548 [2024-07-15 16:15:19.200469] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:43.548 [2024-07-15 16:15:19.200603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:43.548 [2024-07-15 16:15:19.200717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:43.548 [2024-07-15 16:15:19.200873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:43.548 [2024-07-15 16:15:19.200875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:44.118 16:15:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:44.118 16:15:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:23:44.118 16:15:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:44.118 16:15:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:44.118 16:15:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:44.118 16:15:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:44.118 16:15:19 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:44.118 16:15:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.118 16:15:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:44.118 [2024-07-15 16:15:19.882765] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:44.118 16:15:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.118 16:15:19 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:44.118 16:15:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.118 16:15:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:44.118 Malloc0 00:23:44.118 16:15:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.118 16:15:19 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:44.118 16:15:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.118 16:15:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:44.118 16:15:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.118 16:15:19 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:44.118 16:15:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.118 16:15:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:44.118 16:15:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.118 16:15:19 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:44.118 16:15:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.118 16:15:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:44.118 [2024-07-15 16:15:19.942157] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:23:44.118 16:15:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.118 16:15:19 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:44.118 16:15:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.118 16:15:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:44.118 [ 00:23:44.118 { 00:23:44.118 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:44.118 "subtype": "Discovery", 00:23:44.118 "listen_addresses": [], 00:23:44.118 "allow_any_host": true, 00:23:44.118 "hosts": [] 00:23:44.118 }, 00:23:44.118 { 00:23:44.380 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:44.380 "subtype": "NVMe", 00:23:44.380 "listen_addresses": [ 00:23:44.380 { 00:23:44.380 "trtype": "TCP", 00:23:44.380 "adrfam": "IPv4", 00:23:44.380 "traddr": "10.0.0.2", 00:23:44.380 "trsvcid": "4420" 00:23:44.380 } 00:23:44.380 ], 00:23:44.380 "allow_any_host": true, 00:23:44.380 "hosts": [], 00:23:44.380 "serial_number": "SPDK00000000000001", 00:23:44.380 "model_number": "SPDK bdev Controller", 00:23:44.380 "max_namespaces": 2, 00:23:44.380 "min_cntlid": 1, 00:23:44.380 "max_cntlid": 65519, 00:23:44.380 "namespaces": [ 00:23:44.380 { 00:23:44.380 "nsid": 1, 00:23:44.380 "bdev_name": "Malloc0", 00:23:44.380 "name": "Malloc0", 00:23:44.380 "nguid": "C00045C6039B42A49C285C6E763A3D6E", 00:23:44.380 "uuid": "c00045c6-039b-42a4-9c28-5c6e763a3d6e" 00:23:44.380 } 00:23:44.380 ] 00:23:44.380 } 00:23:44.380 ] 00:23:44.380 16:15:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.380 16:15:19 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:44.380 16:15:19 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:44.380 16:15:19 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=2375170 00:23:44.380 16:15:19 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:44.380 16:15:19 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:44.380 16:15:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:23:44.380 16:15:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:44.380 16:15:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:23:44.380 16:15:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:23:44.380 16:15:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:44.380 EAL: No free 2048 kB hugepages reported on node 1 00:23:44.380 16:15:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:44.380 16:15:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:23:44.380 16:15:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:23:44.380 16:15:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:44.380 16:15:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:44.380 16:15:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:44.380 16:15:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:23:44.380 16:15:20 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:44.380 16:15:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.380 16:15:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:44.380 Malloc1 00:23:44.380 16:15:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.380 16:15:20 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:44.380 16:15:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.380 16:15:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:44.641 16:15:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.641 16:15:20 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:44.641 16:15:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.641 16:15:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:44.641 Asynchronous Event Request test 00:23:44.641 Attaching to 10.0.0.2 00:23:44.641 Attached to 10.0.0.2 00:23:44.641 Registering asynchronous event callbacks... 00:23:44.641 Starting namespace attribute notice tests for all controllers... 00:23:44.641 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:44.641 aer_cb - Changed Namespace 00:23:44.641 Cleaning up... 00:23:44.641 [ 00:23:44.641 { 00:23:44.641 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:44.641 "subtype": "Discovery", 00:23:44.641 "listen_addresses": [], 00:23:44.641 "allow_any_host": true, 00:23:44.641 "hosts": [] 00:23:44.641 }, 00:23:44.641 { 00:23:44.641 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:44.641 "subtype": "NVMe", 00:23:44.641 "listen_addresses": [ 00:23:44.641 { 00:23:44.641 "trtype": "TCP", 00:23:44.641 "adrfam": "IPv4", 00:23:44.641 "traddr": "10.0.0.2", 00:23:44.641 "trsvcid": "4420" 00:23:44.641 } 00:23:44.641 ], 00:23:44.641 "allow_any_host": true, 00:23:44.641 "hosts": [], 00:23:44.641 "serial_number": "SPDK00000000000001", 00:23:44.641 "model_number": "SPDK bdev Controller", 00:23:44.641 "max_namespaces": 2, 00:23:44.641 "min_cntlid": 1, 00:23:44.641 "max_cntlid": 65519, 00:23:44.641 "namespaces": [ 00:23:44.641 { 00:23:44.641 "nsid": 1, 00:23:44.641 "bdev_name": "Malloc0", 00:23:44.641 "name": "Malloc0", 00:23:44.641 "nguid": "C00045C6039B42A49C285C6E763A3D6E", 00:23:44.641 "uuid": "c00045c6-039b-42a4-9c28-5c6e763a3d6e" 00:23:44.641 }, 00:23:44.641 { 00:23:44.641 "nsid": 2, 00:23:44.641 "bdev_name": "Malloc1", 00:23:44.641 "name": "Malloc1", 00:23:44.641 "nguid": "B56C0C9AC0CA4025BD7845316AD1D3CE", 00:23:44.641 "uuid": "b56c0c9a-c0ca-4025-bd78-45316ad1d3ce" 00:23:44.641 } 00:23:44.641 ] 00:23:44.641 } 00:23:44.641 ] 00:23:44.641 16:15:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.641 16:15:20 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 2375170 00:23:44.641 16:15:20 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:44.641 16:15:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.641 16:15:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:44.641 16:15:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.641 16:15:20 nvmf_tcp.nvmf_aer -- host/aer.sh@46 
-- # rpc_cmd bdev_malloc_delete Malloc1 00:23:44.641 16:15:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.641 16:15:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:44.641 16:15:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.641 16:15:20 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:44.641 16:15:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.641 16:15:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:44.641 16:15:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.641 16:15:20 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:44.641 16:15:20 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:44.641 16:15:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:44.641 16:15:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:23:44.641 16:15:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:44.641 16:15:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:23:44.641 16:15:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:44.641 16:15:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:44.641 rmmod nvme_tcp 00:23:44.641 rmmod nvme_fabrics 00:23:44.641 rmmod nvme_keyring 00:23:44.641 16:15:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:44.641 16:15:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:23:44.641 16:15:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:23:44.641 16:15:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2374899 ']' 00:23:44.641 16:15:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2374899 00:23:44.641 16:15:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 2374899 ']' 00:23:44.641 16:15:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 2374899 00:23:44.641 16:15:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:23:44.641 16:15:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:44.641 16:15:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2374899 00:23:44.641 16:15:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:44.641 16:15:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:44.641 16:15:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2374899' 00:23:44.641 killing process with pid 2374899 00:23:44.641 16:15:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 2374899 00:23:44.641 16:15:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 2374899 00:23:44.901 16:15:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:44.901 16:15:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:44.901 16:15:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:44.901 16:15:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:44.901 16:15:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:44.901 16:15:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.901 16:15:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:23:44.901 16:15:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.813 16:15:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:46.813 00:23:46.813 real 0m10.860s 00:23:46.813 user 0m7.459s 00:23:46.813 sys 0m5.704s 00:23:46.813 16:15:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:46.813 16:15:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:46.813 ************************************ 00:23:46.813 END TEST nvmf_aer 00:23:46.813 ************************************ 00:23:47.075 16:15:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:47.075 16:15:22 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:47.075 16:15:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:47.075 16:15:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:47.075 16:15:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:47.075 ************************************ 00:23:47.075 START TEST nvmf_async_init 00:23:47.075 ************************************ 00:23:47.075 16:15:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:47.075 * Looking for test storage... 00:23:47.075 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:47.075 16:15:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:47.075 16:15:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:47.075 16:15:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:47.075 16:15:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:47.075 16:15:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:47.075 16:15:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:47.075 16:15:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:47.075 16:15:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:47.075 16:15:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:47.075 16:15:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:47.075 16:15:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:47.075 16:15:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:47.075 16:15:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:47.075 16:15:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:47.075 16:15:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:47.075 16:15:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:47.075 16:15:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:47.075 16:15:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:47.075 16:15:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:47.075 16:15:22 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:47.075 16:15:22 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:47.075 16:15:22 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:47.075 16:15:22 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.075 16:15:22 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.075 16:15:22 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.075 16:15:22 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:47.075 16:15:22 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.075 16:15:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:23:47.075 16:15:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:47.075 16:15:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:47.075 16:15:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:47.075 16:15:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:47.075 16:15:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:47.075 16:15:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:23:47.075 16:15:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:47.075 16:15:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:47.075 16:15:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:47.075 16:15:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:47.075 16:15:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:47.075 16:15:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:47.075 16:15:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:47.075 16:15:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:47.075 16:15:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=617a13b73436420ebd7757b60c989207 00:23:47.076 16:15:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:47.076 16:15:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:47.076 16:15:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:47.076 16:15:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:47.076 16:15:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:47.076 16:15:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:47.076 16:15:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:47.076 16:15:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:47.076 16:15:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.076 16:15:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:47.076 16:15:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:47.076 16:15:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:23:47.076 16:15:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:53.661 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:53.661 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # 
[[ tcp == rdma ]] 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:53.661 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:53.661 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:53.661 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:53.922 
16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:53.922 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:53.922 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:53.922 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:53.922 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:53.922 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:53.922 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:54.183 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:54.183 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:54.184 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:54.184 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:54.184 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.524 ms 00:23:54.184 00:23:54.184 --- 10.0.0.2 ping statistics --- 00:23:54.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:54.184 rtt min/avg/max/mdev = 0.524/0.524/0.524/0.000 ms 00:23:54.184 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:54.184 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:54.184 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.374 ms 00:23:54.184 00:23:54.184 --- 10.0.0.1 ping statistics --- 00:23:54.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:54.184 rtt min/avg/max/mdev = 0.374/0.374/0.374/0.000 ms 00:23:54.184 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:54.184 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:23:54.184 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:54.184 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:54.184 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:54.184 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:54.184 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:54.184 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:54.184 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:54.184 16:15:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:54.184 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:54.184 16:15:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:54.184 16:15:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:54.184 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2379306 00:23:54.184 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2379306 00:23:54.184 16:15:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x1 00:23:54.184 16:15:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 2379306 ']' 00:23:54.184 16:15:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:54.184 16:15:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:54.184 16:15:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:54.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:54.184 16:15:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:54.184 16:15:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:54.184 [2024-07-15 16:15:29.924689] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:23:54.184 [2024-07-15 16:15:29.924769] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:54.184 EAL: No free 2048 kB hugepages reported on node 1 00:23:54.184 [2024-07-15 16:15:29.995665] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.445 [2024-07-15 16:15:30.067954] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:54.445 [2024-07-15 16:15:30.067991] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:54.445 [2024-07-15 16:15:30.068000] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:54.445 [2024-07-15 16:15:30.068007] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:54.445 [2024-07-15 16:15:30.068012] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
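For orientation, the host/async_init.sh trace that follows boils down to a short RPC sequence against the nvmf_tgt started above. A minimal sketch, assuming scripts/rpc.py is pointed at the same /var/tmp/spdk.sock socket the test's rpc_cmd wrapper uses (the arguments themselves are taken from the trace):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o                      # -o: disable C2H success optimization (TCP-only flag)
  $rpc bdev_null_create null0 1024 512                      # 1024 MiB null bdev, 512-byte blocks (num_blocks 2097152)
  $rpc bdev_wait_for_examine
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a  # -a: allow any host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 617a13b73436420ebd7757b60c989207
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
  $rpc bdev_get_bdevs -b nvme0n1                            # expect uuid 617a13b7-3436-420e-bd77-57b60c989207

The bdev_get_bdevs output below shows the namespace GUID supplied at creation time coming back as the nvme0n1 UUID, which is what the test goes on to verify.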
00:23:54.445 [2024-07-15 16:15:30.068031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.445 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:54.445 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:23:54.445 16:15:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:54.445 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:54.445 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:54.445 16:15:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:54.445 16:15:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:54.445 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.445 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:54.445 [2024-07-15 16:15:30.205706] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:54.445 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.445 16:15:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:54.445 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.445 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:54.445 null0 00:23:54.445 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.445 16:15:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:54.445 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.445 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:54.445 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.445 16:15:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:54.445 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.445 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:54.445 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.445 16:15:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 617a13b73436420ebd7757b60c989207 00:23:54.445 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.445 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:54.445 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.445 16:15:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:54.445 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.445 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:54.445 [2024-07-15 16:15:30.265987] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:54.445 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:23:54.445 16:15:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:54.445 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.445 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:54.706 nvme0n1 00:23:54.706 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.706 16:15:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:54.706 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.706 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:54.706 [ 00:23:54.706 { 00:23:54.706 "name": "nvme0n1", 00:23:54.706 "aliases": [ 00:23:54.706 "617a13b7-3436-420e-bd77-57b60c989207" 00:23:54.706 ], 00:23:54.706 "product_name": "NVMe disk", 00:23:54.706 "block_size": 512, 00:23:54.706 "num_blocks": 2097152, 00:23:54.706 "uuid": "617a13b7-3436-420e-bd77-57b60c989207", 00:23:54.706 "assigned_rate_limits": { 00:23:54.706 "rw_ios_per_sec": 0, 00:23:54.706 "rw_mbytes_per_sec": 0, 00:23:54.706 "r_mbytes_per_sec": 0, 00:23:54.706 "w_mbytes_per_sec": 0 00:23:54.706 }, 00:23:54.706 "claimed": false, 00:23:54.706 "zoned": false, 00:23:54.706 "supported_io_types": { 00:23:54.706 "read": true, 00:23:54.706 "write": true, 00:23:54.706 "unmap": false, 00:23:54.706 "flush": true, 00:23:54.706 "reset": true, 00:23:54.706 "nvme_admin": true, 00:23:54.706 "nvme_io": true, 00:23:54.706 "nvme_io_md": false, 00:23:54.706 "write_zeroes": true, 00:23:54.706 "zcopy": false, 00:23:54.706 "get_zone_info": false, 00:23:54.706 "zone_management": false, 00:23:54.706 "zone_append": false, 00:23:54.706 "compare": true, 00:23:54.706 "compare_and_write": true, 00:23:54.706 "abort": true, 00:23:54.706 "seek_hole": false, 00:23:54.706 "seek_data": false, 00:23:54.706 "copy": true, 00:23:54.706 "nvme_iov_md": false 00:23:54.706 }, 00:23:54.706 "memory_domains": [ 00:23:54.706 { 00:23:54.706 "dma_device_id": "system", 00:23:54.706 "dma_device_type": 1 00:23:54.706 } 00:23:54.706 ], 00:23:54.706 "driver_specific": { 00:23:54.706 "nvme": [ 00:23:54.706 { 00:23:54.706 "trid": { 00:23:54.706 "trtype": "TCP", 00:23:54.706 "adrfam": "IPv4", 00:23:54.706 "traddr": "10.0.0.2", 00:23:54.706 "trsvcid": "4420", 00:23:54.706 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:54.706 }, 00:23:54.706 "ctrlr_data": { 00:23:54.706 "cntlid": 1, 00:23:54.706 "vendor_id": "0x8086", 00:23:54.706 "model_number": "SPDK bdev Controller", 00:23:54.706 "serial_number": "00000000000000000000", 00:23:54.706 "firmware_revision": "24.09", 00:23:54.706 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:54.706 "oacs": { 00:23:54.706 "security": 0, 00:23:54.706 "format": 0, 00:23:54.706 "firmware": 0, 00:23:54.706 "ns_manage": 0 00:23:54.706 }, 00:23:54.706 "multi_ctrlr": true, 00:23:54.706 "ana_reporting": false 00:23:54.706 }, 00:23:54.706 "vs": { 00:23:54.706 "nvme_version": "1.3" 00:23:54.706 }, 00:23:54.706 "ns_data": { 00:23:54.706 "id": 1, 00:23:54.706 "can_share": true 00:23:54.706 } 00:23:54.706 } 00:23:54.706 ], 00:23:54.706 "mp_policy": "active_passive" 00:23:54.706 } 00:23:54.706 } 00:23:54.706 ] 00:23:54.706 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.706 16:15:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 
00:23:54.706 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.706 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:54.706 [2024-07-15 16:15:30.542542] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:54.706 [2024-07-15 16:15:30.542604] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2135df0 (9): Bad file descriptor 00:23:54.967 [2024-07-15 16:15:30.674223] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:54.967 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.967 16:15:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:54.967 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.967 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:54.967 [ 00:23:54.967 { 00:23:54.967 "name": "nvme0n1", 00:23:54.967 "aliases": [ 00:23:54.967 "617a13b7-3436-420e-bd77-57b60c989207" 00:23:54.967 ], 00:23:54.967 "product_name": "NVMe disk", 00:23:54.967 "block_size": 512, 00:23:54.967 "num_blocks": 2097152, 00:23:54.967 "uuid": "617a13b7-3436-420e-bd77-57b60c989207", 00:23:54.967 "assigned_rate_limits": { 00:23:54.967 "rw_ios_per_sec": 0, 00:23:54.967 "rw_mbytes_per_sec": 0, 00:23:54.967 "r_mbytes_per_sec": 0, 00:23:54.967 "w_mbytes_per_sec": 0 00:23:54.967 }, 00:23:54.967 "claimed": false, 00:23:54.967 "zoned": false, 00:23:54.967 "supported_io_types": { 00:23:54.967 "read": true, 00:23:54.967 "write": true, 00:23:54.967 "unmap": false, 00:23:54.967 "flush": true, 00:23:54.967 "reset": true, 00:23:54.967 "nvme_admin": true, 00:23:54.967 "nvme_io": true, 00:23:54.967 "nvme_io_md": false, 00:23:54.967 "write_zeroes": true, 00:23:54.967 "zcopy": false, 00:23:54.967 "get_zone_info": false, 00:23:54.967 "zone_management": false, 00:23:54.967 "zone_append": false, 00:23:54.967 "compare": true, 00:23:54.967 "compare_and_write": true, 00:23:54.967 "abort": true, 00:23:54.967 "seek_hole": false, 00:23:54.967 "seek_data": false, 00:23:54.967 "copy": true, 00:23:54.967 "nvme_iov_md": false 00:23:54.967 }, 00:23:54.967 "memory_domains": [ 00:23:54.967 { 00:23:54.967 "dma_device_id": "system", 00:23:54.967 "dma_device_type": 1 00:23:54.967 } 00:23:54.967 ], 00:23:54.967 "driver_specific": { 00:23:54.967 "nvme": [ 00:23:54.967 { 00:23:54.967 "trid": { 00:23:54.967 "trtype": "TCP", 00:23:54.967 "adrfam": "IPv4", 00:23:54.967 "traddr": "10.0.0.2", 00:23:54.967 "trsvcid": "4420", 00:23:54.967 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:54.967 }, 00:23:54.967 "ctrlr_data": { 00:23:54.967 "cntlid": 2, 00:23:54.967 "vendor_id": "0x8086", 00:23:54.967 "model_number": "SPDK bdev Controller", 00:23:54.967 "serial_number": "00000000000000000000", 00:23:54.967 "firmware_revision": "24.09", 00:23:54.967 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:54.967 "oacs": { 00:23:54.967 "security": 0, 00:23:54.967 "format": 0, 00:23:54.967 "firmware": 0, 00:23:54.967 "ns_manage": 0 00:23:54.967 }, 00:23:54.967 "multi_ctrlr": true, 00:23:54.967 "ana_reporting": false 00:23:54.967 }, 00:23:54.967 "vs": { 00:23:54.967 "nvme_version": "1.3" 00:23:54.967 }, 00:23:54.967 "ns_data": { 00:23:54.967 "id": 1, 00:23:54.967 "can_share": true 00:23:54.967 } 00:23:54.967 } 00:23:54.967 ], 00:23:54.967 "mp_policy": "active_passive" 00:23:54.967 } 00:23:54.967 } 
00:23:54.967 ] 00:23:54.967 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.967 16:15:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:54.967 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.967 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:54.967 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.967 16:15:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:54.967 16:15:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.nJ3SUB26Rd 00:23:54.967 16:15:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:54.967 16:15:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.nJ3SUB26Rd 00:23:54.967 16:15:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:54.968 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.968 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:54.968 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.968 16:15:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:54.968 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.968 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:54.968 [2024-07-15 16:15:30.747183] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:54.968 [2024-07-15 16:15:30.747299] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:54.968 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.968 16:15:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nJ3SUB26Rd 00:23:54.968 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.968 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:54.968 [2024-07-15 16:15:30.759206] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:54.968 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.968 16:15:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nJ3SUB26Rd 00:23:54.968 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.968 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:54.968 [2024-07-15 16:15:30.771257] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:54.968 [2024-07-15 16:15:30.771292] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 
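The TLS leg traced above re-exposes the same subsystem on port 4421 with a pre-shared key configured on both the target and the initiator side. Condensed into direct RPC calls as a sketch (socket path and $rpc as in the earlier sketch are assumptions; the key string, NQNs, and flags are the ones from the trace):

  # $rpc as in the earlier sketch (scripts/rpc.py against /var/tmp/spdk.sock)
  key=$(mktemp)                 # /tmp/tmp.nJ3SUB26Rd in this run
  echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key"
  chmod 0600 "$key"
  $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key"
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
       -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key"

Passing the PSK as a file path is what produces the two deprecation warnings in the trace ('PSK path' and 'spdk_nvme_ctrlr_opts.psk', both scheduled for removal in v24.09); the bdev_get_bdevs output that follows shows the controller reconnected on trsvcid 4421 with cntlid 3.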
00:23:55.229 nvme0n1 00:23:55.229 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.229 16:15:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:55.229 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.229 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:55.229 [ 00:23:55.229 { 00:23:55.229 "name": "nvme0n1", 00:23:55.229 "aliases": [ 00:23:55.229 "617a13b7-3436-420e-bd77-57b60c989207" 00:23:55.229 ], 00:23:55.229 "product_name": "NVMe disk", 00:23:55.229 "block_size": 512, 00:23:55.229 "num_blocks": 2097152, 00:23:55.229 "uuid": "617a13b7-3436-420e-bd77-57b60c989207", 00:23:55.229 "assigned_rate_limits": { 00:23:55.229 "rw_ios_per_sec": 0, 00:23:55.229 "rw_mbytes_per_sec": 0, 00:23:55.229 "r_mbytes_per_sec": 0, 00:23:55.229 "w_mbytes_per_sec": 0 00:23:55.229 }, 00:23:55.229 "claimed": false, 00:23:55.229 "zoned": false, 00:23:55.229 "supported_io_types": { 00:23:55.229 "read": true, 00:23:55.229 "write": true, 00:23:55.229 "unmap": false, 00:23:55.229 "flush": true, 00:23:55.229 "reset": true, 00:23:55.229 "nvme_admin": true, 00:23:55.229 "nvme_io": true, 00:23:55.229 "nvme_io_md": false, 00:23:55.229 "write_zeroes": true, 00:23:55.229 "zcopy": false, 00:23:55.229 "get_zone_info": false, 00:23:55.229 "zone_management": false, 00:23:55.229 "zone_append": false, 00:23:55.229 "compare": true, 00:23:55.229 "compare_and_write": true, 00:23:55.229 "abort": true, 00:23:55.229 "seek_hole": false, 00:23:55.229 "seek_data": false, 00:23:55.229 "copy": true, 00:23:55.229 "nvme_iov_md": false 00:23:55.229 }, 00:23:55.229 "memory_domains": [ 00:23:55.229 { 00:23:55.229 "dma_device_id": "system", 00:23:55.229 "dma_device_type": 1 00:23:55.229 } 00:23:55.229 ], 00:23:55.229 "driver_specific": { 00:23:55.229 "nvme": [ 00:23:55.229 { 00:23:55.229 "trid": { 00:23:55.229 "trtype": "TCP", 00:23:55.229 "adrfam": "IPv4", 00:23:55.229 "traddr": "10.0.0.2", 00:23:55.229 "trsvcid": "4421", 00:23:55.229 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:55.229 }, 00:23:55.229 "ctrlr_data": { 00:23:55.229 "cntlid": 3, 00:23:55.229 "vendor_id": "0x8086", 00:23:55.229 "model_number": "SPDK bdev Controller", 00:23:55.229 "serial_number": "00000000000000000000", 00:23:55.229 "firmware_revision": "24.09", 00:23:55.229 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:55.229 "oacs": { 00:23:55.229 "security": 0, 00:23:55.229 "format": 0, 00:23:55.229 "firmware": 0, 00:23:55.229 "ns_manage": 0 00:23:55.229 }, 00:23:55.229 "multi_ctrlr": true, 00:23:55.229 "ana_reporting": false 00:23:55.229 }, 00:23:55.229 "vs": { 00:23:55.229 "nvme_version": "1.3" 00:23:55.229 }, 00:23:55.229 "ns_data": { 00:23:55.229 "id": 1, 00:23:55.229 "can_share": true 00:23:55.229 } 00:23:55.229 } 00:23:55.229 ], 00:23:55.229 "mp_policy": "active_passive" 00:23:55.229 } 00:23:55.229 } 00:23:55.229 ] 00:23:55.229 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.229 16:15:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:55.229 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.229 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:55.229 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.229 16:15:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f 
/tmp/tmp.nJ3SUB26Rd 00:23:55.229 16:15:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:23:55.229 16:15:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:23:55.229 16:15:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:55.229 16:15:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:23:55.229 16:15:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:55.229 16:15:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:23:55.229 16:15:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:55.229 16:15:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:55.229 rmmod nvme_tcp 00:23:55.229 rmmod nvme_fabrics 00:23:55.229 rmmod nvme_keyring 00:23:55.229 16:15:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:55.229 16:15:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:23:55.229 16:15:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:23:55.229 16:15:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2379306 ']' 00:23:55.229 16:15:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 2379306 00:23:55.229 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 2379306 ']' 00:23:55.229 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 2379306 00:23:55.229 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:23:55.229 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:55.229 16:15:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2379306 00:23:55.229 16:15:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:55.229 16:15:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:55.229 16:15:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2379306' 00:23:55.229 killing process with pid 2379306 00:23:55.229 16:15:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 2379306 00:23:55.229 [2024-07-15 16:15:31.020795] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:55.229 [2024-07-15 16:15:31.020821] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:55.230 16:15:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 2379306 00:23:55.491 16:15:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:55.491 16:15:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:55.491 16:15:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:55.491 16:15:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:55.491 16:15:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:55.491 16:15:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.491 16:15:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:55.491 16:15:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:23:57.403 16:15:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:57.403 00:23:57.403 real 0m10.509s 00:23:57.403 user 0m3.436s 00:23:57.403 sys 0m5.453s 00:23:57.403 16:15:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:57.403 16:15:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:57.403 ************************************ 00:23:57.403 END TEST nvmf_async_init 00:23:57.403 ************************************ 00:23:57.664 16:15:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:57.664 16:15:33 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:57.664 16:15:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:57.664 16:15:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:57.664 16:15:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:57.664 ************************************ 00:23:57.664 START TEST dma 00:23:57.664 ************************************ 00:23:57.664 16:15:33 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:57.664 * Looking for test storage... 00:23:57.664 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:57.664 16:15:33 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:57.664 16:15:33 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:23:57.664 16:15:33 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:57.664 16:15:33 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:57.664 16:15:33 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:57.664 16:15:33 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:57.664 16:15:33 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:57.664 16:15:33 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:57.664 16:15:33 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:57.664 16:15:33 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:57.664 16:15:33 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:57.664 16:15:33 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:57.664 16:15:33 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:57.664 16:15:33 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:57.664 16:15:33 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:57.664 16:15:33 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:57.664 16:15:33 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:57.664 16:15:33 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:57.664 16:15:33 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:57.664 16:15:33 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:57.664 16:15:33 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:57.664 16:15:33 nvmf_tcp.dma -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:23:57.664 16:15:33 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.664 16:15:33 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.664 16:15:33 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.664 16:15:33 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:23:57.664 16:15:33 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.664 16:15:33 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:23:57.665 16:15:33 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:57.665 16:15:33 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:57.665 16:15:33 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:57.665 16:15:33 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:57.665 16:15:33 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:57.665 16:15:33 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:57.665 16:15:33 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:57.665 16:15:33 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:57.665 16:15:33 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:57.665 16:15:33 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:23:57.665 00:23:57.665 real 0m0.110s 00:23:57.665 user 0m0.049s 00:23:57.665 sys 0m0.068s 00:23:57.665 16:15:33 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:57.665 16:15:33 nvmf_tcp.dma 
-- common/autotest_common.sh@10 -- # set +x 00:23:57.665 ************************************ 00:23:57.665 END TEST dma 00:23:57.665 ************************************ 00:23:57.665 16:15:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:57.665 16:15:33 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:57.665 16:15:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:57.665 16:15:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:57.665 16:15:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:57.665 ************************************ 00:23:57.665 START TEST nvmf_identify 00:23:57.665 ************************************ 00:23:57.665 16:15:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:57.926 * Looking for test storage... 00:23:57.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:57.926 16:15:33 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:57.926 16:15:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:57.926 16:15:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:57.926 16:15:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:57.926 16:15:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:57.926 16:15:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:57.926 16:15:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:57.926 16:15:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:57.926 16:15:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:57.926 16:15:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:57.926 16:15:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:57.926 16:15:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:57.926 16:15:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:57.926 16:15:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:57.926 16:15:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:57.926 16:15:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:57.926 16:15:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:57.926 16:15:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:57.926 16:15:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:57.926 16:15:33 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:57.927 16:15:33 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:57.927 16:15:33 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:57.927 16:15:33 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.927 16:15:33 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.927 16:15:33 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.927 16:15:33 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:57.927 16:15:33 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.927 16:15:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:23:57.927 16:15:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:57.927 16:15:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:57.927 16:15:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:57.927 16:15:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:57.927 16:15:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:57.927 16:15:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:57.927 16:15:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:57.927 16:15:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:57.927 16:15:33 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:57.927 16:15:33 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:57.927 16:15:33 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:57.927 16:15:33 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:57.927 16:15:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:57.927 16:15:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:57.927 16:15:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:57.927 16:15:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:57.927 16:15:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:57.927 16:15:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:57.927 16:15:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:57.927 16:15:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:57.927 16:15:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:57.927 16:15:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:23:57.927 16:15:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:04.516 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:04.516 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:04.516 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
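The xtrace above is nvmf/common.sh classifying the test NICs by PCI vendor/device ID (both ports of the Intel E810, 0x8086:0x159b, at 0000:4b:00.0/1) and then resolving each PCI address to its kernel net device through sysfs. A minimal bash sketch of that lookup follows; the device-ID list and the cvl_* names are taken from this log, while the helper name find_test_nics and its output format are illustrative only, not the harness's actual code.

    # Sketch only: map supported E810 test NICs (by PCI ID) to their net devices.
    # The ID list mirrors the e810 entries in the log (0x1592, 0x159b); the helper
    # name and output format are hypothetical, not part of nvmf/common.sh.
    find_test_nics() {
        local ids="0x1592 0x159b" pci vendor device netdev
        for pci in /sys/bus/pci/devices/*; do
            vendor=$(cat "$pci/vendor"); device=$(cat "$pci/device")
            [[ $vendor == 0x8086 ]] || continue
            [[ " $ids " == *" $device "* ]] || continue
            for netdev in "$pci"/net/*; do
                [[ -e $netdev ]] && echo "${pci##*/} -> ${netdev##*/}"
            done
        done
    }
    find_test_nics   # on this rig: 0000:4b:00.0 -> cvl_0_0, 0000:4b:00.1 -> cvl_0_1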
00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:04.516 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:04.516 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:04.517 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:04.517 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:04.517 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:04.517 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:04.517 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:04.517 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:04.517 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:04.517 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:04.517 16:15:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:04.517 16:15:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:04.517 16:15:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:04.517 16:15:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:04.517 16:15:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:04.517 16:15:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:04.517 16:15:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:04.517 16:15:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:04.517 16:15:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:04.517 16:15:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:04.517 16:15:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:04.517 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:04.517 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.517 ms 00:24:04.517 00:24:04.517 --- 10.0.0.2 ping statistics --- 00:24:04.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:04.517 rtt min/avg/max/mdev = 0.517/0.517/0.517/0.000 ms 00:24:04.517 16:15:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:04.517 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:04.517 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.394 ms 00:24:04.517 00:24:04.517 --- 10.0.0.1 ping statistics --- 00:24:04.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:04.517 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:24:04.517 16:15:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:04.517 16:15:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:24:04.517 16:15:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:04.517 16:15:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:04.517 16:15:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:04.517 16:15:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:04.517 16:15:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:04.517 16:15:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:04.517 16:15:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:04.517 16:15:40 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:04.517 16:15:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:04.517 16:15:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:04.517 16:15:40 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2383576 00:24:04.517 16:15:40 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:04.517 16:15:40 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2383576 00:24:04.517 16:15:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 2383576 ']' 00:24:04.517 16:15:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:04.517 16:15:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:04.517 16:15:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:04.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:04.517 16:15:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:04.517 16:15:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:04.517 16:15:40 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:04.778 [2024-07-15 16:15:40.404839] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
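For reference, the namespace plumbing that produced the two ping checks above can be reproduced by hand. The commands below are condensed from the nvmf_tcp_init xtrace in this log; the cvl_0_0/cvl_0_1 names and 10.0.0.x addresses match this rig, and sudo is assumed for a non-root shell.

    # Condensed from the nvmf_tcp_init xtrace above: the target-side port is moved
    # into a private network namespace, the initiator-side port stays in the host.
    sudo ip netns add cvl_0_0_ns_spdk
    sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    sudo ip addr add 10.0.0.1/24 dev cvl_0_1
    sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    sudo ip link set cvl_0_1 up
    sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up
    sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                      # initiator -> target
    sudo ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

The target application itself then runs inside that namespace, which is why the xtrace just above launches it as ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF.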
00:24:04.778 [2024-07-15 16:15:40.404930] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:04.778 EAL: No free 2048 kB hugepages reported on node 1 00:24:04.778 [2024-07-15 16:15:40.478633] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:04.778 [2024-07-15 16:15:40.556250] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:04.778 [2024-07-15 16:15:40.556290] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:04.778 [2024-07-15 16:15:40.556298] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:04.778 [2024-07-15 16:15:40.556304] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:04.778 [2024-07-15 16:15:40.556310] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:04.778 [2024-07-15 16:15:40.556400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:04.778 [2024-07-15 16:15:40.556515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:04.778 [2024-07-15 16:15:40.556672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:04.778 [2024-07-15 16:15:40.556673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:05.408 16:15:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:05.408 16:15:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:24:05.408 16:15:41 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:05.408 16:15:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.408 16:15:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:05.408 [2024-07-15 16:15:41.195602] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:05.675 16:15:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.675 16:15:41 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:05.675 16:15:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:05.675 16:15:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:05.675 16:15:41 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:05.675 16:15:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.675 16:15:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:05.675 Malloc0 00:24:05.675 16:15:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.675 16:15:41 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:05.675 16:15:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.675 16:15:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:05.675 16:15:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.675 16:15:41 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid 
ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:05.675 16:15:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.675 16:15:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:05.675 16:15:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.675 16:15:41 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:05.675 16:15:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.675 16:15:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:05.675 [2024-07-15 16:15:41.291067] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:05.675 16:15:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.675 16:15:41 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:05.675 16:15:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.675 16:15:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:05.675 16:15:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.675 16:15:41 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:05.675 16:15:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.675 16:15:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:05.675 [ 00:24:05.675 { 00:24:05.675 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:05.675 "subtype": "Discovery", 00:24:05.675 "listen_addresses": [ 00:24:05.675 { 00:24:05.675 "trtype": "TCP", 00:24:05.675 "adrfam": "IPv4", 00:24:05.675 "traddr": "10.0.0.2", 00:24:05.675 "trsvcid": "4420" 00:24:05.675 } 00:24:05.675 ], 00:24:05.675 "allow_any_host": true, 00:24:05.675 "hosts": [] 00:24:05.675 }, 00:24:05.675 { 00:24:05.675 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:05.675 "subtype": "NVMe", 00:24:05.675 "listen_addresses": [ 00:24:05.675 { 00:24:05.675 "trtype": "TCP", 00:24:05.675 "adrfam": "IPv4", 00:24:05.675 "traddr": "10.0.0.2", 00:24:05.675 "trsvcid": "4420" 00:24:05.675 } 00:24:05.675 ], 00:24:05.675 "allow_any_host": true, 00:24:05.675 "hosts": [], 00:24:05.675 "serial_number": "SPDK00000000000001", 00:24:05.675 "model_number": "SPDK bdev Controller", 00:24:05.675 "max_namespaces": 32, 00:24:05.675 "min_cntlid": 1, 00:24:05.675 "max_cntlid": 65519, 00:24:05.675 "namespaces": [ 00:24:05.675 { 00:24:05.675 "nsid": 1, 00:24:05.675 "bdev_name": "Malloc0", 00:24:05.675 "name": "Malloc0", 00:24:05.675 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:05.675 "eui64": "ABCDEF0123456789", 00:24:05.675 "uuid": "997e0883-9575-455f-b60a-8dffc0dfa446" 00:24:05.675 } 00:24:05.675 ] 00:24:05.675 } 00:24:05.675 ] 00:24:05.675 16:15:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.675 16:15:41 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:05.675 [2024-07-15 16:15:41.352981] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
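Putting the rpc_cmd calls above together: in this harness rpc_cmd forwards to scripts/rpc.py against the target's RPC socket (/var/tmp/spdk.sock here), so the same target layout can be rebuilt by hand roughly as sketched below. The method names and arguments are copied from the xtrace; only the $RPC shorthand and the socket path assumption are added for readability.

    # Rough, hand-driven equivalent of the rpc_cmd sequence in identify.sh above
    # (assumes nvmf_tgt is already running and listening on /var/tmp/spdk.sock).
    RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_get_subsystems   # returns the JSON dump reproduced above

The discovery-controller report that follows is then produced by the spdk_nvme_identify invocation shown above, pointed at subnqn:nqn.2014-08.org.nvmexpress.discovery on 10.0.0.2:4420.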
00:24:05.675 [2024-07-15 16:15:41.353024] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2383908 ] 00:24:05.675 EAL: No free 2048 kB hugepages reported on node 1 00:24:05.675 [2024-07-15 16:15:41.386776] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:24:05.675 [2024-07-15 16:15:41.386826] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:05.675 [2024-07-15 16:15:41.386831] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:05.675 [2024-07-15 16:15:41.386841] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:05.675 [2024-07-15 16:15:41.386848] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:05.675 [2024-07-15 16:15:41.387369] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:24:05.675 [2024-07-15 16:15:41.387402] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x230aec0 0 00:24:05.675 [2024-07-15 16:15:41.398133] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:05.675 [2024-07-15 16:15:41.398145] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:05.675 [2024-07-15 16:15:41.398150] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:05.675 [2024-07-15 16:15:41.398153] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:05.675 [2024-07-15 16:15:41.398188] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.675 [2024-07-15 16:15:41.398194] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.675 [2024-07-15 16:15:41.398198] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x230aec0) 00:24:05.675 [2024-07-15 16:15:41.398211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:05.675 [2024-07-15 16:15:41.398227] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238de40, cid 0, qid 0 00:24:05.675 [2024-07-15 16:15:41.405132] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.675 [2024-07-15 16:15:41.405141] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.675 [2024-07-15 16:15:41.405145] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.675 [2024-07-15 16:15:41.405149] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238de40) on tqpair=0x230aec0 00:24:05.675 [2024-07-15 16:15:41.405159] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:05.675 [2024-07-15 16:15:41.405166] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:24:05.675 [2024-07-15 16:15:41.405171] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:24:05.675 [2024-07-15 16:15:41.405184] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.675 [2024-07-15 16:15:41.405188] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.676 [2024-07-15 16:15:41.405191] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x230aec0) 00:24:05.676 [2024-07-15 16:15:41.405199] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.676 [2024-07-15 16:15:41.405212] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238de40, cid 0, qid 0 00:24:05.676 [2024-07-15 16:15:41.405443] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.676 [2024-07-15 16:15:41.405450] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.676 [2024-07-15 16:15:41.405453] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.676 [2024-07-15 16:15:41.405457] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238de40) on tqpair=0x230aec0 00:24:05.676 [2024-07-15 16:15:41.405462] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:24:05.676 [2024-07-15 16:15:41.405470] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:24:05.676 [2024-07-15 16:15:41.405477] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.676 [2024-07-15 16:15:41.405480] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.676 [2024-07-15 16:15:41.405484] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x230aec0) 00:24:05.676 [2024-07-15 16:15:41.405490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.676 [2024-07-15 16:15:41.405501] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238de40, cid 0, qid 0 00:24:05.676 [2024-07-15 16:15:41.405709] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.676 [2024-07-15 16:15:41.405716] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.676 [2024-07-15 16:15:41.405719] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.676 [2024-07-15 16:15:41.405723] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238de40) on tqpair=0x230aec0 00:24:05.676 [2024-07-15 16:15:41.405731] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:24:05.676 [2024-07-15 16:15:41.405739] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:24:05.676 [2024-07-15 16:15:41.405745] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.676 [2024-07-15 16:15:41.405749] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.676 [2024-07-15 16:15:41.405752] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x230aec0) 00:24:05.676 [2024-07-15 16:15:41.405759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.676 [2024-07-15 16:15:41.405769] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238de40, cid 0, qid 0 00:24:05.676 [2024-07-15 16:15:41.405956] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.676 
[2024-07-15 16:15:41.405962] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.676 [2024-07-15 16:15:41.405965] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.676 [2024-07-15 16:15:41.405969] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238de40) on tqpair=0x230aec0 00:24:05.676 [2024-07-15 16:15:41.405974] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:05.676 [2024-07-15 16:15:41.405983] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.676 [2024-07-15 16:15:41.405987] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.676 [2024-07-15 16:15:41.405990] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x230aec0) 00:24:05.676 [2024-07-15 16:15:41.405997] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.676 [2024-07-15 16:15:41.406007] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238de40, cid 0, qid 0 00:24:05.676 [2024-07-15 16:15:41.406191] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.676 [2024-07-15 16:15:41.406198] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.676 [2024-07-15 16:15:41.406202] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.676 [2024-07-15 16:15:41.406205] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238de40) on tqpair=0x230aec0 00:24:05.676 [2024-07-15 16:15:41.406210] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:24:05.676 [2024-07-15 16:15:41.406214] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:24:05.676 [2024-07-15 16:15:41.406222] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:05.676 [2024-07-15 16:15:41.406327] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:24:05.676 [2024-07-15 16:15:41.406332] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:05.676 [2024-07-15 16:15:41.406339] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.676 [2024-07-15 16:15:41.406343] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.676 [2024-07-15 16:15:41.406347] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x230aec0) 00:24:05.676 [2024-07-15 16:15:41.406353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.676 [2024-07-15 16:15:41.406364] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238de40, cid 0, qid 0 00:24:05.676 [2024-07-15 16:15:41.406590] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.676 [2024-07-15 16:15:41.406598] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.676 [2024-07-15 16:15:41.406602] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:24:05.676 [2024-07-15 16:15:41.406605] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238de40) on tqpair=0x230aec0 00:24:05.676 [2024-07-15 16:15:41.406610] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:05.676 [2024-07-15 16:15:41.406619] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.676 [2024-07-15 16:15:41.406623] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.676 [2024-07-15 16:15:41.406626] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x230aec0) 00:24:05.676 [2024-07-15 16:15:41.406633] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.676 [2024-07-15 16:15:41.406643] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238de40, cid 0, qid 0 00:24:05.676 [2024-07-15 16:15:41.406868] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.676 [2024-07-15 16:15:41.406874] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.676 [2024-07-15 16:15:41.406878] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.676 [2024-07-15 16:15:41.406882] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238de40) on tqpair=0x230aec0 00:24:05.676 [2024-07-15 16:15:41.406886] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:05.676 [2024-07-15 16:15:41.406891] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:24:05.676 [2024-07-15 16:15:41.406898] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:24:05.676 [2024-07-15 16:15:41.406906] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:24:05.676 [2024-07-15 16:15:41.406915] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.676 [2024-07-15 16:15:41.406919] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x230aec0) 00:24:05.676 [2024-07-15 16:15:41.406925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.676 [2024-07-15 16:15:41.406935] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238de40, cid 0, qid 0 00:24:05.676 [2024-07-15 16:15:41.407164] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:05.676 [2024-07-15 16:15:41.407171] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:05.676 [2024-07-15 16:15:41.407175] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:05.676 [2024-07-15 16:15:41.407179] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x230aec0): datao=0, datal=4096, cccid=0 00:24:05.676 [2024-07-15 16:15:41.407183] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x238de40) on tqpair(0x230aec0): expected_datao=0, payload_size=4096 00:24:05.676 [2024-07-15 16:15:41.407188] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:24:05.676 [2024-07-15 16:15:41.407233] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:05.676 [2024-07-15 16:15:41.407237] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:05.676 [2024-07-15 16:15:41.451131] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.676 [2024-07-15 16:15:41.451141] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.676 [2024-07-15 16:15:41.451145] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.676 [2024-07-15 16:15:41.451149] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238de40) on tqpair=0x230aec0 00:24:05.676 [2024-07-15 16:15:41.451157] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:24:05.676 [2024-07-15 16:15:41.451168] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:24:05.676 [2024-07-15 16:15:41.451173] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:24:05.676 [2024-07-15 16:15:41.451178] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:24:05.676 [2024-07-15 16:15:41.451182] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:24:05.676 [2024-07-15 16:15:41.451187] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:24:05.676 [2024-07-15 16:15:41.451195] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:24:05.676 [2024-07-15 16:15:41.451202] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.676 [2024-07-15 16:15:41.451206] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.676 [2024-07-15 16:15:41.451210] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x230aec0) 00:24:05.676 [2024-07-15 16:15:41.451218] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:05.676 [2024-07-15 16:15:41.451231] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238de40, cid 0, qid 0 00:24:05.676 [2024-07-15 16:15:41.451426] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.676 [2024-07-15 16:15:41.451433] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.676 [2024-07-15 16:15:41.451436] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.677 [2024-07-15 16:15:41.451440] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238de40) on tqpair=0x230aec0 00:24:05.677 [2024-07-15 16:15:41.451447] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.677 [2024-07-15 16:15:41.451451] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.677 [2024-07-15 16:15:41.451455] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x230aec0) 00:24:05.677 [2024-07-15 16:15:41.451461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.677 [2024-07-15 16:15:41.451467] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.677 [2024-07-15 16:15:41.451470] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.677 [2024-07-15 16:15:41.451474] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x230aec0) 00:24:05.677 [2024-07-15 16:15:41.451480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.677 [2024-07-15 16:15:41.451486] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.677 [2024-07-15 16:15:41.451489] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.677 [2024-07-15 16:15:41.451493] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x230aec0) 00:24:05.677 [2024-07-15 16:15:41.451498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.677 [2024-07-15 16:15:41.451504] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.677 [2024-07-15 16:15:41.451508] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.677 [2024-07-15 16:15:41.451511] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x230aec0) 00:24:05.677 [2024-07-15 16:15:41.451516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.677 [2024-07-15 16:15:41.451521] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:24:05.677 [2024-07-15 16:15:41.451534] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:05.677 [2024-07-15 16:15:41.451541] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.677 [2024-07-15 16:15:41.451545] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x230aec0) 00:24:05.677 [2024-07-15 16:15:41.451551] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.677 [2024-07-15 16:15:41.451563] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238de40, cid 0, qid 0 00:24:05.677 [2024-07-15 16:15:41.451568] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238dfc0, cid 1, qid 0 00:24:05.677 [2024-07-15 16:15:41.451573] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238e140, cid 2, qid 0 00:24:05.677 [2024-07-15 16:15:41.451578] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238e2c0, cid 3, qid 0 00:24:05.677 [2024-07-15 16:15:41.451582] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238e440, cid 4, qid 0 00:24:05.677 [2024-07-15 16:15:41.451824] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.677 [2024-07-15 16:15:41.451831] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.677 [2024-07-15 16:15:41.451834] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.677 [2024-07-15 16:15:41.451838] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238e440) on tqpair=0x230aec0 00:24:05.677 [2024-07-15 16:15:41.451843] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:24:05.677 [2024-07-15 16:15:41.451848] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:24:05.677 [2024-07-15 16:15:41.451858] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.677 [2024-07-15 16:15:41.451862] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x230aec0) 00:24:05.677 [2024-07-15 16:15:41.451868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.677 [2024-07-15 16:15:41.451878] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238e440, cid 4, qid 0 00:24:05.677 [2024-07-15 16:15:41.452078] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:05.677 [2024-07-15 16:15:41.452084] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:05.677 [2024-07-15 16:15:41.452088] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:05.677 [2024-07-15 16:15:41.452092] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x230aec0): datao=0, datal=4096, cccid=4 00:24:05.677 [2024-07-15 16:15:41.452096] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x238e440) on tqpair(0x230aec0): expected_datao=0, payload_size=4096 00:24:05.677 [2024-07-15 16:15:41.452100] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.677 [2024-07-15 16:15:41.452107] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:05.677 [2024-07-15 16:15:41.452111] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:05.677 [2024-07-15 16:15:41.452261] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.677 [2024-07-15 16:15:41.452268] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.677 [2024-07-15 16:15:41.452271] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.677 [2024-07-15 16:15:41.452275] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238e440) on tqpair=0x230aec0 00:24:05.677 [2024-07-15 16:15:41.452286] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:24:05.677 [2024-07-15 16:15:41.452309] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.677 [2024-07-15 16:15:41.452313] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x230aec0) 00:24:05.677 [2024-07-15 16:15:41.452322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.677 [2024-07-15 16:15:41.452329] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.677 [2024-07-15 16:15:41.452332] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.677 [2024-07-15 16:15:41.452336] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x230aec0) 00:24:05.677 [2024-07-15 16:15:41.452342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.677 [2024-07-15 16:15:41.452355] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x238e440, cid 4, qid 0 00:24:05.677 [2024-07-15 16:15:41.452361] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238e5c0, cid 5, qid 0 00:24:05.677 [2024-07-15 16:15:41.452609] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:05.677 [2024-07-15 16:15:41.452616] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:05.677 [2024-07-15 16:15:41.452619] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:05.677 [2024-07-15 16:15:41.452623] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x230aec0): datao=0, datal=1024, cccid=4 00:24:05.677 [2024-07-15 16:15:41.452627] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x238e440) on tqpair(0x230aec0): expected_datao=0, payload_size=1024 00:24:05.677 [2024-07-15 16:15:41.452631] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.677 [2024-07-15 16:15:41.452638] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:05.677 [2024-07-15 16:15:41.452641] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:05.677 [2024-07-15 16:15:41.452647] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.677 [2024-07-15 16:15:41.452653] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.677 [2024-07-15 16:15:41.452656] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.677 [2024-07-15 16:15:41.452660] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238e5c0) on tqpair=0x230aec0 00:24:05.677 [2024-07-15 16:15:41.493356] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.677 [2024-07-15 16:15:41.493369] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.677 [2024-07-15 16:15:41.493373] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.677 [2024-07-15 16:15:41.493377] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238e440) on tqpair=0x230aec0 00:24:05.677 [2024-07-15 16:15:41.493396] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.677 [2024-07-15 16:15:41.493400] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x230aec0) 00:24:05.677 [2024-07-15 16:15:41.493407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.677 [2024-07-15 16:15:41.493422] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238e440, cid 4, qid 0 00:24:05.677 [2024-07-15 16:15:41.493597] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:05.677 [2024-07-15 16:15:41.493604] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:05.677 [2024-07-15 16:15:41.493607] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:05.677 [2024-07-15 16:15:41.493611] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x230aec0): datao=0, datal=3072, cccid=4 00:24:05.677 [2024-07-15 16:15:41.493615] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x238e440) on tqpair(0x230aec0): expected_datao=0, payload_size=3072 00:24:05.677 [2024-07-15 16:15:41.493619] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.677 [2024-07-15 16:15:41.493626] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:05.677 [2024-07-15 16:15:41.493630] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:05.677 [2024-07-15 16:15:41.493764] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.677 [2024-07-15 16:15:41.493770] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.677 [2024-07-15 16:15:41.493776] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.677 [2024-07-15 16:15:41.493780] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238e440) on tqpair=0x230aec0 00:24:05.677 [2024-07-15 16:15:41.493788] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.677 [2024-07-15 16:15:41.493792] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x230aec0) 00:24:05.677 [2024-07-15 16:15:41.493798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.677 [2024-07-15 16:15:41.493811] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238e440, cid 4, qid 0 00:24:05.677 [2024-07-15 16:15:41.494023] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:05.677 [2024-07-15 16:15:41.494030] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:05.677 [2024-07-15 16:15:41.494033] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:05.677 [2024-07-15 16:15:41.494037] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x230aec0): datao=0, datal=8, cccid=4 00:24:05.677 [2024-07-15 16:15:41.494041] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x238e440) on tqpair(0x230aec0): expected_datao=0, payload_size=8 00:24:05.677 [2024-07-15 16:15:41.494045] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.678 [2024-07-15 16:15:41.494052] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:05.678 [2024-07-15 16:15:41.494055] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:05.942 [2024-07-15 16:15:41.534348] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.942 [2024-07-15 16:15:41.534359] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.942 [2024-07-15 16:15:41.534363] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.942 [2024-07-15 16:15:41.534367] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238e440) on tqpair=0x230aec0 00:24:05.942 ===================================================== 00:24:05.942 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:05.942 ===================================================== 00:24:05.943 Controller Capabilities/Features 00:24:05.943 ================================ 00:24:05.943 Vendor ID: 0000 00:24:05.943 Subsystem Vendor ID: 0000 00:24:05.943 Serial Number: .................... 00:24:05.943 Model Number: ........................................ 
00:24:05.943 Firmware Version: 24.09 00:24:05.943 Recommended Arb Burst: 0 00:24:05.943 IEEE OUI Identifier: 00 00 00 00:24:05.943 Multi-path I/O 00:24:05.943 May have multiple subsystem ports: No 00:24:05.943 May have multiple controllers: No 00:24:05.943 Associated with SR-IOV VF: No 00:24:05.943 Max Data Transfer Size: 131072 00:24:05.943 Max Number of Namespaces: 0 00:24:05.943 Max Number of I/O Queues: 1024 00:24:05.943 NVMe Specification Version (VS): 1.3 00:24:05.943 NVMe Specification Version (Identify): 1.3 00:24:05.943 Maximum Queue Entries: 128 00:24:05.943 Contiguous Queues Required: Yes 00:24:05.943 Arbitration Mechanisms Supported 00:24:05.943 Weighted Round Robin: Not Supported 00:24:05.943 Vendor Specific: Not Supported 00:24:05.943 Reset Timeout: 15000 ms 00:24:05.943 Doorbell Stride: 4 bytes 00:24:05.943 NVM Subsystem Reset: Not Supported 00:24:05.943 Command Sets Supported 00:24:05.943 NVM Command Set: Supported 00:24:05.943 Boot Partition: Not Supported 00:24:05.943 Memory Page Size Minimum: 4096 bytes 00:24:05.943 Memory Page Size Maximum: 4096 bytes 00:24:05.943 Persistent Memory Region: Not Supported 00:24:05.943 Optional Asynchronous Events Supported 00:24:05.943 Namespace Attribute Notices: Not Supported 00:24:05.943 Firmware Activation Notices: Not Supported 00:24:05.943 ANA Change Notices: Not Supported 00:24:05.943 PLE Aggregate Log Change Notices: Not Supported 00:24:05.943 LBA Status Info Alert Notices: Not Supported 00:24:05.943 EGE Aggregate Log Change Notices: Not Supported 00:24:05.943 Normal NVM Subsystem Shutdown event: Not Supported 00:24:05.943 Zone Descriptor Change Notices: Not Supported 00:24:05.943 Discovery Log Change Notices: Supported 00:24:05.943 Controller Attributes 00:24:05.943 128-bit Host Identifier: Not Supported 00:24:05.943 Non-Operational Permissive Mode: Not Supported 00:24:05.943 NVM Sets: Not Supported 00:24:05.943 Read Recovery Levels: Not Supported 00:24:05.943 Endurance Groups: Not Supported 00:24:05.943 Predictable Latency Mode: Not Supported 00:24:05.943 Traffic Based Keep ALive: Not Supported 00:24:05.943 Namespace Granularity: Not Supported 00:24:05.943 SQ Associations: Not Supported 00:24:05.943 UUID List: Not Supported 00:24:05.943 Multi-Domain Subsystem: Not Supported 00:24:05.943 Fixed Capacity Management: Not Supported 00:24:05.943 Variable Capacity Management: Not Supported 00:24:05.943 Delete Endurance Group: Not Supported 00:24:05.943 Delete NVM Set: Not Supported 00:24:05.943 Extended LBA Formats Supported: Not Supported 00:24:05.943 Flexible Data Placement Supported: Not Supported 00:24:05.943 00:24:05.943 Controller Memory Buffer Support 00:24:05.943 ================================ 00:24:05.943 Supported: No 00:24:05.943 00:24:05.943 Persistent Memory Region Support 00:24:05.943 ================================ 00:24:05.943 Supported: No 00:24:05.943 00:24:05.943 Admin Command Set Attributes 00:24:05.943 ============================ 00:24:05.943 Security Send/Receive: Not Supported 00:24:05.943 Format NVM: Not Supported 00:24:05.943 Firmware Activate/Download: Not Supported 00:24:05.943 Namespace Management: Not Supported 00:24:05.943 Device Self-Test: Not Supported 00:24:05.943 Directives: Not Supported 00:24:05.943 NVMe-MI: Not Supported 00:24:05.943 Virtualization Management: Not Supported 00:24:05.943 Doorbell Buffer Config: Not Supported 00:24:05.943 Get LBA Status Capability: Not Supported 00:24:05.943 Command & Feature Lockdown Capability: Not Supported 00:24:05.943 Abort Command Limit: 1 00:24:05.943 Async 
Event Request Limit: 4 00:24:05.943 Number of Firmware Slots: N/A 00:24:05.943 Firmware Slot 1 Read-Only: N/A 00:24:05.943 Firmware Activation Without Reset: N/A 00:24:05.943 Multiple Update Detection Support: N/A 00:24:05.943 Firmware Update Granularity: No Information Provided 00:24:05.943 Per-Namespace SMART Log: No 00:24:05.943 Asymmetric Namespace Access Log Page: Not Supported 00:24:05.943 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:05.943 Command Effects Log Page: Not Supported 00:24:05.943 Get Log Page Extended Data: Supported 00:24:05.943 Telemetry Log Pages: Not Supported 00:24:05.943 Persistent Event Log Pages: Not Supported 00:24:05.943 Supported Log Pages Log Page: May Support 00:24:05.943 Commands Supported & Effects Log Page: Not Supported 00:24:05.943 Feature Identifiers & Effects Log Page:May Support 00:24:05.943 NVMe-MI Commands & Effects Log Page: May Support 00:24:05.943 Data Area 4 for Telemetry Log: Not Supported 00:24:05.943 Error Log Page Entries Supported: 128 00:24:05.943 Keep Alive: Not Supported 00:24:05.943 00:24:05.943 NVM Command Set Attributes 00:24:05.943 ========================== 00:24:05.943 Submission Queue Entry Size 00:24:05.943 Max: 1 00:24:05.943 Min: 1 00:24:05.943 Completion Queue Entry Size 00:24:05.943 Max: 1 00:24:05.943 Min: 1 00:24:05.943 Number of Namespaces: 0 00:24:05.943 Compare Command: Not Supported 00:24:05.943 Write Uncorrectable Command: Not Supported 00:24:05.943 Dataset Management Command: Not Supported 00:24:05.943 Write Zeroes Command: Not Supported 00:24:05.943 Set Features Save Field: Not Supported 00:24:05.943 Reservations: Not Supported 00:24:05.943 Timestamp: Not Supported 00:24:05.943 Copy: Not Supported 00:24:05.943 Volatile Write Cache: Not Present 00:24:05.943 Atomic Write Unit (Normal): 1 00:24:05.943 Atomic Write Unit (PFail): 1 00:24:05.943 Atomic Compare & Write Unit: 1 00:24:05.943 Fused Compare & Write: Supported 00:24:05.943 Scatter-Gather List 00:24:05.943 SGL Command Set: Supported 00:24:05.943 SGL Keyed: Supported 00:24:05.943 SGL Bit Bucket Descriptor: Not Supported 00:24:05.943 SGL Metadata Pointer: Not Supported 00:24:05.943 Oversized SGL: Not Supported 00:24:05.943 SGL Metadata Address: Not Supported 00:24:05.943 SGL Offset: Supported 00:24:05.943 Transport SGL Data Block: Not Supported 00:24:05.943 Replay Protected Memory Block: Not Supported 00:24:05.943 00:24:05.943 Firmware Slot Information 00:24:05.943 ========================= 00:24:05.943 Active slot: 0 00:24:05.943 00:24:05.943 00:24:05.943 Error Log 00:24:05.943 ========= 00:24:05.943 00:24:05.943 Active Namespaces 00:24:05.943 ================= 00:24:05.943 Discovery Log Page 00:24:05.943 ================== 00:24:05.943 Generation Counter: 2 00:24:05.943 Number of Records: 2 00:24:05.943 Record Format: 0 00:24:05.943 00:24:05.943 Discovery Log Entry 0 00:24:05.943 ---------------------- 00:24:05.943 Transport Type: 3 (TCP) 00:24:05.943 Address Family: 1 (IPv4) 00:24:05.943 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:05.943 Entry Flags: 00:24:05.943 Duplicate Returned Information: 1 00:24:05.943 Explicit Persistent Connection Support for Discovery: 1 00:24:05.943 Transport Requirements: 00:24:05.943 Secure Channel: Not Required 00:24:05.943 Port ID: 0 (0x0000) 00:24:05.943 Controller ID: 65535 (0xffff) 00:24:05.943 Admin Max SQ Size: 128 00:24:05.943 Transport Service Identifier: 4420 00:24:05.943 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:05.943 Transport Address: 10.0.0.2 00:24:05.943 
Discovery Log Entry 1 00:24:05.943 ---------------------- 00:24:05.943 Transport Type: 3 (TCP) 00:24:05.943 Address Family: 1 (IPv4) 00:24:05.943 Subsystem Type: 2 (NVM Subsystem) 00:24:05.943 Entry Flags: 00:24:05.943 Duplicate Returned Information: 0 00:24:05.943 Explicit Persistent Connection Support for Discovery: 0 00:24:05.943 Transport Requirements: 00:24:05.943 Secure Channel: Not Required 00:24:05.943 Port ID: 0 (0x0000) 00:24:05.943 Controller ID: 65535 (0xffff) 00:24:05.943 Admin Max SQ Size: 128 00:24:05.943 Transport Service Identifier: 4420 00:24:05.943 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:05.943 Transport Address: 10.0.0.2 [2024-07-15 16:15:41.534455] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:24:05.943 [2024-07-15 16:15:41.534466] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238de40) on tqpair=0x230aec0 00:24:05.943 [2024-07-15 16:15:41.534472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.943 [2024-07-15 16:15:41.534477] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238dfc0) on tqpair=0x230aec0 00:24:05.943 [2024-07-15 16:15:41.534482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.943 [2024-07-15 16:15:41.534487] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238e140) on tqpair=0x230aec0 00:24:05.943 [2024-07-15 16:15:41.534491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.943 [2024-07-15 16:15:41.534496] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238e2c0) on tqpair=0x230aec0 00:24:05.943 [2024-07-15 16:15:41.534501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.944 [2024-07-15 16:15:41.534512] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.944 [2024-07-15 16:15:41.534516] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.944 [2024-07-15 16:15:41.534519] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x230aec0) 00:24:05.944 [2024-07-15 16:15:41.534526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.944 [2024-07-15 16:15:41.534540] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238e2c0, cid 3, qid 0 00:24:05.944 [2024-07-15 16:15:41.534788] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.944 [2024-07-15 16:15:41.534794] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.944 [2024-07-15 16:15:41.534800] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.944 [2024-07-15 16:15:41.534803] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238e2c0) on tqpair=0x230aec0 00:24:05.944 [2024-07-15 16:15:41.534810] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.944 [2024-07-15 16:15:41.534814] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.944 [2024-07-15 16:15:41.534818] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x230aec0) 00:24:05.944 [2024-07-15 
16:15:41.534824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.944 [2024-07-15 16:15:41.534837] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238e2c0, cid 3, qid 0 00:24:05.944 [2024-07-15 16:15:41.535086] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.944 [2024-07-15 16:15:41.535092] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.944 [2024-07-15 16:15:41.535096] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.944 [2024-07-15 16:15:41.535100] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238e2c0) on tqpair=0x230aec0 00:24:05.944 [2024-07-15 16:15:41.535104] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:24:05.944 [2024-07-15 16:15:41.535109] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:24:05.944 [2024-07-15 16:15:41.535118] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.944 [2024-07-15 16:15:41.539128] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.944 [2024-07-15 16:15:41.539133] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x230aec0) 00:24:05.944 [2024-07-15 16:15:41.539140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.944 [2024-07-15 16:15:41.539152] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238e2c0, cid 3, qid 0 00:24:05.944 [2024-07-15 16:15:41.539381] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.944 [2024-07-15 16:15:41.539387] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.944 [2024-07-15 16:15:41.539390] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.944 [2024-07-15 16:15:41.539394] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238e2c0) on tqpair=0x230aec0 00:24:05.944 [2024-07-15 16:15:41.539402] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:24:05.944 00:24:05.944 16:15:41 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:05.944 [2024-07-15 16:15:41.583238] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
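The trace above shows spdk_nvme_identify finishing its first pass against the discovery subsystem (controller init, GET LOG PAGE for the discovery log, then controller destruct) before the test script re-runs the tool against nqn.2016-06.io.spdk:cnode1. As a rough sketch only, not part of the test output, the same connect-and-identify path could be driven through SPDK's public NVMe API along the following lines; the program name and the fields printed are arbitrary illustration choices, and the env-opts handling assumes a v24.09-era SPDK.

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	/* Environment init; opts_size is set first because newer SPDK
	 * releases consult it inside spdk_env_opts_init() (assumption:
	 * v24.09-era behavior; harmless on older releases). */
	env_opts.opts_size = sizeof(env_opts);
	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch"; /* arbitrary name for this sketch */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same TCP target the test run uses; adjust traddr/subnqn for other setups. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		fprintf(stderr, "failed to parse transport ID\n");
		return 1;
	}

	/* spdk_nvme_connect() runs the init state machine seen in the debug
	 * log above: FABRIC CONNECT, read VS/CAP, enable, Identify, keep-alive. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "spdk_nvme_connect() failed\n");
		return 1;
	}

	/* Identify Controller data is cached by the driver during init;
	 * print a couple of the fields that appear in the listing below. */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Serial Number: %.20s\n", (const char *)cdata->sn);
	printf("Model Number:  %.40s\n", (const char *)cdata->mn);

	spdk_nvme_detach(ctrlr);
	spdk_env_fini();
	return 0;
}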
00:24:05.944 [2024-07-15 16:15:41.583307] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2383911 ] 00:24:05.944 EAL: No free 2048 kB hugepages reported on node 1 00:24:05.944 [2024-07-15 16:15:41.616668] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:24:05.944 [2024-07-15 16:15:41.616716] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:05.944 [2024-07-15 16:15:41.616721] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:05.944 [2024-07-15 16:15:41.616732] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:05.944 [2024-07-15 16:15:41.616741] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:05.944 [2024-07-15 16:15:41.620147] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:24:05.944 [2024-07-15 16:15:41.620172] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xa85ec0 0 00:24:05.944 [2024-07-15 16:15:41.628132] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:05.944 [2024-07-15 16:15:41.628143] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:05.944 [2024-07-15 16:15:41.628147] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:05.944 [2024-07-15 16:15:41.628150] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:05.944 [2024-07-15 16:15:41.628180] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.944 [2024-07-15 16:15:41.628186] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.944 [2024-07-15 16:15:41.628189] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa85ec0) 00:24:05.944 [2024-07-15 16:15:41.628201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:05.944 [2024-07-15 16:15:41.628217] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08e40, cid 0, qid 0 00:24:05.944 [2024-07-15 16:15:41.635132] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.944 [2024-07-15 16:15:41.635141] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.944 [2024-07-15 16:15:41.635144] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.944 [2024-07-15 16:15:41.635149] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08e40) on tqpair=0xa85ec0 00:24:05.944 [2024-07-15 16:15:41.635157] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:05.944 [2024-07-15 16:15:41.635163] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:24:05.944 [2024-07-15 16:15:41.635168] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:24:05.944 [2024-07-15 16:15:41.635180] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.944 [2024-07-15 16:15:41.635184] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.944 
[2024-07-15 16:15:41.635188] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa85ec0) 00:24:05.944 [2024-07-15 16:15:41.635195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.944 [2024-07-15 16:15:41.635208] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08e40, cid 0, qid 0 00:24:05.944 [2024-07-15 16:15:41.635427] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.944 [2024-07-15 16:15:41.635434] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.944 [2024-07-15 16:15:41.635437] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.944 [2024-07-15 16:15:41.635441] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08e40) on tqpair=0xa85ec0 00:24:05.944 [2024-07-15 16:15:41.635446] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:24:05.944 [2024-07-15 16:15:41.635453] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:24:05.944 [2024-07-15 16:15:41.635459] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.944 [2024-07-15 16:15:41.635463] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.944 [2024-07-15 16:15:41.635466] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa85ec0) 00:24:05.944 [2024-07-15 16:15:41.635473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.944 [2024-07-15 16:15:41.635484] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08e40, cid 0, qid 0 00:24:05.944 [2024-07-15 16:15:41.635697] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.944 [2024-07-15 16:15:41.635703] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.944 [2024-07-15 16:15:41.635707] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.944 [2024-07-15 16:15:41.635710] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08e40) on tqpair=0xa85ec0 00:24:05.944 [2024-07-15 16:15:41.635715] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:24:05.944 [2024-07-15 16:15:41.635723] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:24:05.944 [2024-07-15 16:15:41.635730] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.944 [2024-07-15 16:15:41.635733] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.944 [2024-07-15 16:15:41.635737] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa85ec0) 00:24:05.944 [2024-07-15 16:15:41.635743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.944 [2024-07-15 16:15:41.635753] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08e40, cid 0, qid 0 00:24:05.944 [2024-07-15 16:15:41.635969] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.944 [2024-07-15 16:15:41.635975] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.944 
[2024-07-15 16:15:41.635978] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.944 [2024-07-15 16:15:41.635982] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08e40) on tqpair=0xa85ec0 00:24:05.944 [2024-07-15 16:15:41.635987] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:05.944 [2024-07-15 16:15:41.635996] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.944 [2024-07-15 16:15:41.636000] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.944 [2024-07-15 16:15:41.636003] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa85ec0) 00:24:05.944 [2024-07-15 16:15:41.636010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.944 [2024-07-15 16:15:41.636019] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08e40, cid 0, qid 0 00:24:05.944 [2024-07-15 16:15:41.636208] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.944 [2024-07-15 16:15:41.636215] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.944 [2024-07-15 16:15:41.636218] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.944 [2024-07-15 16:15:41.636222] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08e40) on tqpair=0xa85ec0 00:24:05.944 [2024-07-15 16:15:41.636226] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:24:05.944 [2024-07-15 16:15:41.636230] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:24:05.944 [2024-07-15 16:15:41.636238] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:05.944 [2024-07-15 16:15:41.636343] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:24:05.944 [2024-07-15 16:15:41.636347] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:05.944 [2024-07-15 16:15:41.636354] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.944 [2024-07-15 16:15:41.636358] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.944 [2024-07-15 16:15:41.636362] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa85ec0) 00:24:05.945 [2024-07-15 16:15:41.636368] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.945 [2024-07-15 16:15:41.636381] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08e40, cid 0, qid 0 00:24:05.945 [2024-07-15 16:15:41.636591] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.945 [2024-07-15 16:15:41.636597] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.945 [2024-07-15 16:15:41.636601] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.945 [2024-07-15 16:15:41.636604] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08e40) on tqpair=0xa85ec0 00:24:05.945 [2024-07-15 
16:15:41.636609] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:05.945 [2024-07-15 16:15:41.636618] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.945 [2024-07-15 16:15:41.636621] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.945 [2024-07-15 16:15:41.636625] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa85ec0) 00:24:05.945 [2024-07-15 16:15:41.636631] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.945 [2024-07-15 16:15:41.636641] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08e40, cid 0, qid 0 00:24:05.945 [2024-07-15 16:15:41.636828] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.945 [2024-07-15 16:15:41.636834] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.945 [2024-07-15 16:15:41.636837] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.945 [2024-07-15 16:15:41.636841] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08e40) on tqpair=0xa85ec0 00:24:05.945 [2024-07-15 16:15:41.636845] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:05.945 [2024-07-15 16:15:41.636850] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:24:05.945 [2024-07-15 16:15:41.636857] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:24:05.945 [2024-07-15 16:15:41.636868] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:24:05.945 [2024-07-15 16:15:41.636877] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.945 [2024-07-15 16:15:41.636880] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa85ec0) 00:24:05.945 [2024-07-15 16:15:41.636887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.945 [2024-07-15 16:15:41.636897] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08e40, cid 0, qid 0 00:24:05.945 [2024-07-15 16:15:41.637093] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:05.945 [2024-07-15 16:15:41.637099] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:05.945 [2024-07-15 16:15:41.637102] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:05.945 [2024-07-15 16:15:41.637106] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa85ec0): datao=0, datal=4096, cccid=0 00:24:05.945 [2024-07-15 16:15:41.637111] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb08e40) on tqpair(0xa85ec0): expected_datao=0, payload_size=4096 00:24:05.945 [2024-07-15 16:15:41.637115] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.945 [2024-07-15 16:15:41.637167] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:05.945 [2024-07-15 16:15:41.637172] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:05.945 
[2024-07-15 16:15:41.637451] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.945 [2024-07-15 16:15:41.637457] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.945 [2024-07-15 16:15:41.637460] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.945 [2024-07-15 16:15:41.637466] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08e40) on tqpair=0xa85ec0 00:24:05.945 [2024-07-15 16:15:41.637473] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:24:05.945 [2024-07-15 16:15:41.637480] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:24:05.945 [2024-07-15 16:15:41.637485] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:24:05.945 [2024-07-15 16:15:41.637489] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:24:05.945 [2024-07-15 16:15:41.637493] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:24:05.945 [2024-07-15 16:15:41.637498] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:24:05.945 [2024-07-15 16:15:41.637506] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:24:05.945 [2024-07-15 16:15:41.637512] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.945 [2024-07-15 16:15:41.637516] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.945 [2024-07-15 16:15:41.637519] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa85ec0) 00:24:05.945 [2024-07-15 16:15:41.637526] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:05.945 [2024-07-15 16:15:41.637537] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08e40, cid 0, qid 0 00:24:05.945 [2024-07-15 16:15:41.637752] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.945 [2024-07-15 16:15:41.637758] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.945 [2024-07-15 16:15:41.637762] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.945 [2024-07-15 16:15:41.637765] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08e40) on tqpair=0xa85ec0 00:24:05.945 [2024-07-15 16:15:41.637772] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.945 [2024-07-15 16:15:41.637776] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.945 [2024-07-15 16:15:41.637779] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa85ec0) 00:24:05.945 [2024-07-15 16:15:41.637785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.945 [2024-07-15 16:15:41.637791] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.945 [2024-07-15 16:15:41.637795] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.945 [2024-07-15 16:15:41.637798] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xa85ec0) 
00:24:05.945 [2024-07-15 16:15:41.637804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.945 [2024-07-15 16:15:41.637810] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.945 [2024-07-15 16:15:41.637814] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.945 [2024-07-15 16:15:41.637817] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xa85ec0) 00:24:05.945 [2024-07-15 16:15:41.637823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.945 [2024-07-15 16:15:41.637828] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.945 [2024-07-15 16:15:41.637832] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.945 [2024-07-15 16:15:41.637835] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa85ec0) 00:24:05.945 [2024-07-15 16:15:41.637841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.945 [2024-07-15 16:15:41.637845] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:05.945 [2024-07-15 16:15:41.637857] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:05.945 [2024-07-15 16:15:41.637864] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.945 [2024-07-15 16:15:41.637867] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa85ec0) 00:24:05.945 [2024-07-15 16:15:41.637874] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.945 [2024-07-15 16:15:41.637885] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08e40, cid 0, qid 0 00:24:05.945 [2024-07-15 16:15:41.637890] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08fc0, cid 1, qid 0 00:24:05.945 [2024-07-15 16:15:41.637895] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb09140, cid 2, qid 0 00:24:05.945 [2024-07-15 16:15:41.637899] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb092c0, cid 3, qid 0 00:24:05.945 [2024-07-15 16:15:41.637904] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb09440, cid 4, qid 0 00:24:05.945 [2024-07-15 16:15:41.638111] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.945 [2024-07-15 16:15:41.638118] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.945 [2024-07-15 16:15:41.638121] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.945 [2024-07-15 16:15:41.638130] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb09440) on tqpair=0xa85ec0 00:24:05.945 [2024-07-15 16:15:41.638134] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:24:05.945 [2024-07-15 16:15:41.638139] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:05.945 [2024-07-15 16:15:41.638147] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:24:05.945 [2024-07-15 16:15:41.638153] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:05.945 [2024-07-15 16:15:41.638159] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.945 [2024-07-15 16:15:41.638163] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.945 [2024-07-15 16:15:41.638166] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa85ec0) 00:24:05.945 [2024-07-15 16:15:41.638173] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:05.945 [2024-07-15 16:15:41.638183] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb09440, cid 4, qid 0 00:24:05.945 [2024-07-15 16:15:41.638362] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.945 [2024-07-15 16:15:41.638369] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.945 [2024-07-15 16:15:41.638372] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.945 [2024-07-15 16:15:41.638376] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb09440) on tqpair=0xa85ec0 00:24:05.945 [2024-07-15 16:15:41.638439] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:24:05.945 [2024-07-15 16:15:41.638447] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:05.945 [2024-07-15 16:15:41.638455] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.945 [2024-07-15 16:15:41.638458] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa85ec0) 00:24:05.945 [2024-07-15 16:15:41.638465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.945 [2024-07-15 16:15:41.638477] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb09440, cid 4, qid 0 00:24:05.945 [2024-07-15 16:15:41.638709] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:05.945 [2024-07-15 16:15:41.638716] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:05.945 [2024-07-15 16:15:41.638719] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:05.945 [2024-07-15 16:15:41.638723] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa85ec0): datao=0, datal=4096, cccid=4 00:24:05.946 [2024-07-15 16:15:41.638727] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb09440) on tqpair(0xa85ec0): expected_datao=0, payload_size=4096 00:24:05.946 [2024-07-15 16:15:41.638732] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.946 [2024-07-15 16:15:41.638738] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:05.946 [2024-07-15 16:15:41.638742] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:05.946 [2024-07-15 16:15:41.682132] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.946 [2024-07-15 16:15:41.682141] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:24:05.946 [2024-07-15 16:15:41.682145] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.946 [2024-07-15 16:15:41.682148] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb09440) on tqpair=0xa85ec0 00:24:05.946 [2024-07-15 16:15:41.682158] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:24:05.946 [2024-07-15 16:15:41.682167] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:24:05.946 [2024-07-15 16:15:41.682176] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:24:05.946 [2024-07-15 16:15:41.682183] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.946 [2024-07-15 16:15:41.682187] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa85ec0) 00:24:05.946 [2024-07-15 16:15:41.682194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.946 [2024-07-15 16:15:41.682206] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb09440, cid 4, qid 0 00:24:05.946 [2024-07-15 16:15:41.682401] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:05.946 [2024-07-15 16:15:41.682407] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:05.946 [2024-07-15 16:15:41.682411] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:05.946 [2024-07-15 16:15:41.682414] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa85ec0): datao=0, datal=4096, cccid=4 00:24:05.946 [2024-07-15 16:15:41.682419] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb09440) on tqpair(0xa85ec0): expected_datao=0, payload_size=4096 00:24:05.946 [2024-07-15 16:15:41.682423] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.946 [2024-07-15 16:15:41.682466] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:05.946 [2024-07-15 16:15:41.682470] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:05.946 [2024-07-15 16:15:41.723328] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.946 [2024-07-15 16:15:41.723338] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.946 [2024-07-15 16:15:41.723342] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.946 [2024-07-15 16:15:41.723346] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb09440) on tqpair=0xa85ec0 00:24:05.946 [2024-07-15 16:15:41.723359] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:05.946 [2024-07-15 16:15:41.723369] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:05.946 [2024-07-15 16:15:41.723377] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.946 [2024-07-15 16:15:41.723385] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa85ec0) 00:24:05.946 [2024-07-15 16:15:41.723392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.946 [2024-07-15 16:15:41.723405] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb09440, cid 4, qid 0 00:24:05.946 [2024-07-15 16:15:41.723611] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:05.946 [2024-07-15 16:15:41.723617] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:05.946 [2024-07-15 16:15:41.723621] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:05.946 [2024-07-15 16:15:41.723624] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa85ec0): datao=0, datal=4096, cccid=4 00:24:05.946 [2024-07-15 16:15:41.723629] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb09440) on tqpair(0xa85ec0): expected_datao=0, payload_size=4096 00:24:05.946 [2024-07-15 16:15:41.723633] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.946 [2024-07-15 16:15:41.723675] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:05.946 [2024-07-15 16:15:41.723679] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:05.946 [2024-07-15 16:15:41.764337] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.946 [2024-07-15 16:15:41.764347] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.946 [2024-07-15 16:15:41.764350] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.946 [2024-07-15 16:15:41.764354] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb09440) on tqpair=0xa85ec0 00:24:05.946 [2024-07-15 16:15:41.764362] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:05.946 [2024-07-15 16:15:41.764370] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:24:05.946 [2024-07-15 16:15:41.764379] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:24:05.946 [2024-07-15 16:15:41.764385] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:05.946 [2024-07-15 16:15:41.764390] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:05.946 [2024-07-15 16:15:41.764395] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:24:05.946 [2024-07-15 16:15:41.764400] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:24:05.946 [2024-07-15 16:15:41.764404] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:24:05.946 [2024-07-15 16:15:41.764409] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:24:05.946 [2024-07-15 16:15:41.764422] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.946 [2024-07-15 16:15:41.764426] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa85ec0) 00:24:05.946 [2024-07-15 16:15:41.764433] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.946 [2024-07-15 16:15:41.764440] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.946 [2024-07-15 16:15:41.764444] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.946 [2024-07-15 16:15:41.764447] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa85ec0) 00:24:05.946 [2024-07-15 16:15:41.764453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.946 [2024-07-15 16:15:41.764468] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb09440, cid 4, qid 0 00:24:05.946 [2024-07-15 16:15:41.764475] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb095c0, cid 5, qid 0 00:24:05.946 [2024-07-15 16:15:41.764621] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.946 [2024-07-15 16:15:41.764627] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.946 [2024-07-15 16:15:41.764631] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.946 [2024-07-15 16:15:41.764634] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb09440) on tqpair=0xa85ec0 00:24:05.946 [2024-07-15 16:15:41.764641] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.946 [2024-07-15 16:15:41.764647] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.946 [2024-07-15 16:15:41.764650] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.946 [2024-07-15 16:15:41.764654] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb095c0) on tqpair=0xa85ec0 00:24:05.946 [2024-07-15 16:15:41.764663] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.946 [2024-07-15 16:15:41.764666] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa85ec0) 00:24:05.946 [2024-07-15 16:15:41.764673] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.946 [2024-07-15 16:15:41.764683] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb095c0, cid 5, qid 0 00:24:05.946 [2024-07-15 16:15:41.764858] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.946 [2024-07-15 16:15:41.764864] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.946 [2024-07-15 16:15:41.764867] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.946 [2024-07-15 16:15:41.764871] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb095c0) on tqpair=0xa85ec0 00:24:05.946 [2024-07-15 16:15:41.764879] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.946 [2024-07-15 16:15:41.764883] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa85ec0) 00:24:05.946 [2024-07-15 16:15:41.764889] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.946 [2024-07-15 16:15:41.764899] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb095c0, cid 5, qid 0 00:24:05.946 [2024-07-15 16:15:41.765072] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.946 [2024-07-15 16:15:41.765078] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:24:05.946 [2024-07-15 16:15:41.765081] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.946 [2024-07-15 16:15:41.765085] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb095c0) on tqpair=0xa85ec0 00:24:05.946 [2024-07-15 16:15:41.765094] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.946 [2024-07-15 16:15:41.765098] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa85ec0) 00:24:05.946 [2024-07-15 16:15:41.765104] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.946 [2024-07-15 16:15:41.765113] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb095c0, cid 5, qid 0 00:24:05.946 [2024-07-15 16:15:41.765323] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.946 [2024-07-15 16:15:41.765330] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.946 [2024-07-15 16:15:41.765333] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.946 [2024-07-15 16:15:41.765337] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb095c0) on tqpair=0xa85ec0 00:24:05.946 [2024-07-15 16:15:41.765351] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.946 [2024-07-15 16:15:41.765355] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa85ec0) 00:24:05.946 [2024-07-15 16:15:41.765362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.946 [2024-07-15 16:15:41.765371] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.946 [2024-07-15 16:15:41.765374] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa85ec0) 00:24:05.946 [2024-07-15 16:15:41.765381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.946 [2024-07-15 16:15:41.765388] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.946 [2024-07-15 16:15:41.765391] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xa85ec0) 00:24:05.946 [2024-07-15 16:15:41.765397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.946 [2024-07-15 16:15:41.765405] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.946 [2024-07-15 16:15:41.765408] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xa85ec0) 00:24:05.947 [2024-07-15 16:15:41.765414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.947 [2024-07-15 16:15:41.765426] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb095c0, cid 5, qid 0 00:24:05.947 [2024-07-15 16:15:41.765431] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb09440, cid 4, qid 0 00:24:05.947 [2024-07-15 16:15:41.765435] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb09740, cid 6, qid 0 00:24:05.947 [2024-07-15 
16:15:41.765440] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb098c0, cid 7, qid 0 00:24:05.947 [2024-07-15 16:15:41.765713] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:05.947 [2024-07-15 16:15:41.765720] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:05.947 [2024-07-15 16:15:41.765723] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:05.947 [2024-07-15 16:15:41.765727] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa85ec0): datao=0, datal=8192, cccid=5 00:24:05.947 [2024-07-15 16:15:41.765731] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb095c0) on tqpair(0xa85ec0): expected_datao=0, payload_size=8192 00:24:05.947 [2024-07-15 16:15:41.765735] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.947 [2024-07-15 16:15:41.765940] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:05.947 [2024-07-15 16:15:41.765944] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:05.947 [2024-07-15 16:15:41.765949] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:05.947 [2024-07-15 16:15:41.765955] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:05.947 [2024-07-15 16:15:41.765958] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:05.947 [2024-07-15 16:15:41.765962] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa85ec0): datao=0, datal=512, cccid=4 00:24:05.947 [2024-07-15 16:15:41.765966] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb09440) on tqpair(0xa85ec0): expected_datao=0, payload_size=512 00:24:05.947 [2024-07-15 16:15:41.765970] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.947 [2024-07-15 16:15:41.765977] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:05.947 [2024-07-15 16:15:41.765980] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:05.947 [2024-07-15 16:15:41.765985] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:05.947 [2024-07-15 16:15:41.765991] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:05.947 [2024-07-15 16:15:41.765994] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:05.947 [2024-07-15 16:15:41.765997] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa85ec0): datao=0, datal=512, cccid=6 00:24:05.947 [2024-07-15 16:15:41.766002] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb09740) on tqpair(0xa85ec0): expected_datao=0, payload_size=512 00:24:05.947 [2024-07-15 16:15:41.766008] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.947 [2024-07-15 16:15:41.766014] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:05.947 [2024-07-15 16:15:41.766017] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:05.947 [2024-07-15 16:15:41.766023] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:05.947 [2024-07-15 16:15:41.766029] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:05.947 [2024-07-15 16:15:41.766032] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:05.947 [2024-07-15 16:15:41.766035] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa85ec0): datao=0, datal=4096, cccid=7 00:24:05.947 [2024-07-15 16:15:41.766039] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb098c0) on tqpair(0xa85ec0): expected_datao=0, payload_size=4096 00:24:05.947 [2024-07-15 16:15:41.766044] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.947 [2024-07-15 16:15:41.766050] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:05.947 [2024-07-15 16:15:41.766053] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:05.947 [2024-07-15 16:15:41.770129] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.947 [2024-07-15 16:15:41.770138] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.947 [2024-07-15 16:15:41.770141] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.947 [2024-07-15 16:15:41.770145] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb095c0) on tqpair=0xa85ec0 00:24:05.947 [2024-07-15 16:15:41.770158] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.947 [2024-07-15 16:15:41.770164] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.947 [2024-07-15 16:15:41.770167] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.947 [2024-07-15 16:15:41.770171] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb09440) on tqpair=0xa85ec0 00:24:05.947 [2024-07-15 16:15:41.770181] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.947 [2024-07-15 16:15:41.770187] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.947 [2024-07-15 16:15:41.770190] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.947 [2024-07-15 16:15:41.770194] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb09740) on tqpair=0xa85ec0 00:24:05.947 [2024-07-15 16:15:41.770201] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.947 [2024-07-15 16:15:41.770206] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.947 [2024-07-15 16:15:41.770209] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.947 [2024-07-15 16:15:41.770213] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb098c0) on tqpair=0xa85ec0 00:24:05.947 ===================================================== 00:24:05.947 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:05.947 ===================================================== 00:24:05.947 Controller Capabilities/Features 00:24:05.947 ================================ 00:24:05.947 Vendor ID: 8086 00:24:05.947 Subsystem Vendor ID: 8086 00:24:05.947 Serial Number: SPDK00000000000001 00:24:05.947 Model Number: SPDK bdev Controller 00:24:05.947 Firmware Version: 24.09 00:24:05.947 Recommended Arb Burst: 6 00:24:05.947 IEEE OUI Identifier: e4 d2 5c 00:24:05.947 Multi-path I/O 00:24:05.947 May have multiple subsystem ports: Yes 00:24:05.947 May have multiple controllers: Yes 00:24:05.947 Associated with SR-IOV VF: No 00:24:05.947 Max Data Transfer Size: 131072 00:24:05.947 Max Number of Namespaces: 32 00:24:05.947 Max Number of I/O Queues: 127 00:24:05.947 NVMe Specification Version (VS): 1.3 00:24:05.947 NVMe Specification Version (Identify): 1.3 00:24:05.947 Maximum Queue Entries: 128 00:24:05.947 Contiguous Queues Required: Yes 00:24:05.947 Arbitration Mechanisms Supported 00:24:05.947 Weighted Round Robin: Not Supported 00:24:05.947 Vendor Specific: Not Supported 00:24:05.947 Reset Timeout: 15000 ms 00:24:05.947 
Doorbell Stride: 4 bytes 00:24:05.947 NVM Subsystem Reset: Not Supported 00:24:05.947 Command Sets Supported 00:24:05.947 NVM Command Set: Supported 00:24:05.947 Boot Partition: Not Supported 00:24:05.947 Memory Page Size Minimum: 4096 bytes 00:24:05.947 Memory Page Size Maximum: 4096 bytes 00:24:05.947 Persistent Memory Region: Not Supported 00:24:05.947 Optional Asynchronous Events Supported 00:24:05.947 Namespace Attribute Notices: Supported 00:24:05.947 Firmware Activation Notices: Not Supported 00:24:05.947 ANA Change Notices: Not Supported 00:24:05.947 PLE Aggregate Log Change Notices: Not Supported 00:24:05.947 LBA Status Info Alert Notices: Not Supported 00:24:05.947 EGE Aggregate Log Change Notices: Not Supported 00:24:05.947 Normal NVM Subsystem Shutdown event: Not Supported 00:24:05.947 Zone Descriptor Change Notices: Not Supported 00:24:05.947 Discovery Log Change Notices: Not Supported 00:24:05.947 Controller Attributes 00:24:05.947 128-bit Host Identifier: Supported 00:24:05.947 Non-Operational Permissive Mode: Not Supported 00:24:05.947 NVM Sets: Not Supported 00:24:05.947 Read Recovery Levels: Not Supported 00:24:05.947 Endurance Groups: Not Supported 00:24:05.947 Predictable Latency Mode: Not Supported 00:24:05.947 Traffic Based Keep ALive: Not Supported 00:24:05.947 Namespace Granularity: Not Supported 00:24:05.947 SQ Associations: Not Supported 00:24:05.947 UUID List: Not Supported 00:24:05.947 Multi-Domain Subsystem: Not Supported 00:24:05.947 Fixed Capacity Management: Not Supported 00:24:05.947 Variable Capacity Management: Not Supported 00:24:05.947 Delete Endurance Group: Not Supported 00:24:05.947 Delete NVM Set: Not Supported 00:24:05.947 Extended LBA Formats Supported: Not Supported 00:24:05.947 Flexible Data Placement Supported: Not Supported 00:24:05.947 00:24:05.947 Controller Memory Buffer Support 00:24:05.947 ================================ 00:24:05.947 Supported: No 00:24:05.947 00:24:05.947 Persistent Memory Region Support 00:24:05.947 ================================ 00:24:05.947 Supported: No 00:24:05.947 00:24:05.947 Admin Command Set Attributes 00:24:05.948 ============================ 00:24:05.948 Security Send/Receive: Not Supported 00:24:05.948 Format NVM: Not Supported 00:24:05.948 Firmware Activate/Download: Not Supported 00:24:05.948 Namespace Management: Not Supported 00:24:05.948 Device Self-Test: Not Supported 00:24:05.948 Directives: Not Supported 00:24:05.948 NVMe-MI: Not Supported 00:24:05.948 Virtualization Management: Not Supported 00:24:05.948 Doorbell Buffer Config: Not Supported 00:24:05.948 Get LBA Status Capability: Not Supported 00:24:05.948 Command & Feature Lockdown Capability: Not Supported 00:24:05.948 Abort Command Limit: 4 00:24:05.948 Async Event Request Limit: 4 00:24:05.948 Number of Firmware Slots: N/A 00:24:05.948 Firmware Slot 1 Read-Only: N/A 00:24:05.948 Firmware Activation Without Reset: N/A 00:24:05.948 Multiple Update Detection Support: N/A 00:24:05.948 Firmware Update Granularity: No Information Provided 00:24:05.948 Per-Namespace SMART Log: No 00:24:05.948 Asymmetric Namespace Access Log Page: Not Supported 00:24:05.948 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:05.948 Command Effects Log Page: Supported 00:24:05.948 Get Log Page Extended Data: Supported 00:24:05.948 Telemetry Log Pages: Not Supported 00:24:05.948 Persistent Event Log Pages: Not Supported 00:24:05.948 Supported Log Pages Log Page: May Support 00:24:05.948 Commands Supported & Effects Log Page: Not Supported 00:24:05.948 Feature Identifiers & 
Effects Log Page:May Support 00:24:05.948 NVMe-MI Commands & Effects Log Page: May Support 00:24:05.948 Data Area 4 for Telemetry Log: Not Supported 00:24:05.948 Error Log Page Entries Supported: 128 00:24:05.948 Keep Alive: Supported 00:24:05.948 Keep Alive Granularity: 10000 ms 00:24:05.948 00:24:05.948 NVM Command Set Attributes 00:24:05.948 ========================== 00:24:05.948 Submission Queue Entry Size 00:24:05.948 Max: 64 00:24:05.948 Min: 64 00:24:05.948 Completion Queue Entry Size 00:24:05.948 Max: 16 00:24:05.948 Min: 16 00:24:05.948 Number of Namespaces: 32 00:24:05.948 Compare Command: Supported 00:24:05.948 Write Uncorrectable Command: Not Supported 00:24:05.948 Dataset Management Command: Supported 00:24:05.948 Write Zeroes Command: Supported 00:24:05.948 Set Features Save Field: Not Supported 00:24:05.948 Reservations: Supported 00:24:05.948 Timestamp: Not Supported 00:24:05.948 Copy: Supported 00:24:05.948 Volatile Write Cache: Present 00:24:05.948 Atomic Write Unit (Normal): 1 00:24:05.948 Atomic Write Unit (PFail): 1 00:24:05.948 Atomic Compare & Write Unit: 1 00:24:05.948 Fused Compare & Write: Supported 00:24:05.948 Scatter-Gather List 00:24:05.948 SGL Command Set: Supported 00:24:05.948 SGL Keyed: Supported 00:24:05.948 SGL Bit Bucket Descriptor: Not Supported 00:24:05.948 SGL Metadata Pointer: Not Supported 00:24:05.948 Oversized SGL: Not Supported 00:24:05.948 SGL Metadata Address: Not Supported 00:24:05.948 SGL Offset: Supported 00:24:05.948 Transport SGL Data Block: Not Supported 00:24:05.948 Replay Protected Memory Block: Not Supported 00:24:05.948 00:24:05.948 Firmware Slot Information 00:24:05.948 ========================= 00:24:05.948 Active slot: 1 00:24:05.948 Slot 1 Firmware Revision: 24.09 00:24:05.948 00:24:05.948 00:24:05.948 Commands Supported and Effects 00:24:05.948 ============================== 00:24:05.948 Admin Commands 00:24:05.948 -------------- 00:24:05.948 Get Log Page (02h): Supported 00:24:05.948 Identify (06h): Supported 00:24:05.948 Abort (08h): Supported 00:24:05.948 Set Features (09h): Supported 00:24:05.948 Get Features (0Ah): Supported 00:24:05.948 Asynchronous Event Request (0Ch): Supported 00:24:05.948 Keep Alive (18h): Supported 00:24:05.948 I/O Commands 00:24:05.948 ------------ 00:24:05.948 Flush (00h): Supported LBA-Change 00:24:05.948 Write (01h): Supported LBA-Change 00:24:05.948 Read (02h): Supported 00:24:05.948 Compare (05h): Supported 00:24:05.948 Write Zeroes (08h): Supported LBA-Change 00:24:05.948 Dataset Management (09h): Supported LBA-Change 00:24:05.948 Copy (19h): Supported LBA-Change 00:24:05.948 00:24:05.948 Error Log 00:24:05.948 ========= 00:24:05.948 00:24:05.948 Arbitration 00:24:05.948 =========== 00:24:05.948 Arbitration Burst: 1 00:24:05.948 00:24:05.948 Power Management 00:24:05.948 ================ 00:24:05.948 Number of Power States: 1 00:24:05.948 Current Power State: Power State #0 00:24:05.948 Power State #0: 00:24:05.948 Max Power: 0.00 W 00:24:05.948 Non-Operational State: Operational 00:24:05.948 Entry Latency: Not Reported 00:24:05.948 Exit Latency: Not Reported 00:24:05.948 Relative Read Throughput: 0 00:24:05.948 Relative Read Latency: 0 00:24:05.948 Relative Write Throughput: 0 00:24:05.948 Relative Write Latency: 0 00:24:05.948 Idle Power: Not Reported 00:24:05.948 Active Power: Not Reported 00:24:05.948 Non-Operational Permissive Mode: Not Supported 00:24:05.948 00:24:05.948 Health Information 00:24:05.948 ================== 00:24:05.948 Critical Warnings: 00:24:05.948 Available Spare Space: 
OK 00:24:05.948 Temperature: OK 00:24:05.948 Device Reliability: OK 00:24:05.948 Read Only: No 00:24:05.948 Volatile Memory Backup: OK 00:24:05.948 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:05.948 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:05.948 Available Spare: 0% 00:24:05.948 Available Spare Threshold: 0% 00:24:05.948 Life Percentage Used:[2024-07-15 16:15:41.770314] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.948 [2024-07-15 16:15:41.770319] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xa85ec0) 00:24:05.948 [2024-07-15 16:15:41.770326] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.948 [2024-07-15 16:15:41.770340] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb098c0, cid 7, qid 0 00:24:05.948 [2024-07-15 16:15:41.770589] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.948 [2024-07-15 16:15:41.770595] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.948 [2024-07-15 16:15:41.770598] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.948 [2024-07-15 16:15:41.770602] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb098c0) on tqpair=0xa85ec0 00:24:05.948 [2024-07-15 16:15:41.770632] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:24:05.948 [2024-07-15 16:15:41.770642] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08e40) on tqpair=0xa85ec0 00:24:05.948 [2024-07-15 16:15:41.770648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.948 [2024-07-15 16:15:41.770655] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08fc0) on tqpair=0xa85ec0 00:24:05.948 [2024-07-15 16:15:41.770659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.948 [2024-07-15 16:15:41.770664] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb09140) on tqpair=0xa85ec0 00:24:05.948 [2024-07-15 16:15:41.770669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.948 [2024-07-15 16:15:41.770674] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb092c0) on tqpair=0xa85ec0 00:24:05.948 [2024-07-15 16:15:41.770678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.948 [2024-07-15 16:15:41.770686] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.948 [2024-07-15 16:15:41.770690] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.948 [2024-07-15 16:15:41.770693] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa85ec0) 00:24:05.948 [2024-07-15 16:15:41.770700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.948 [2024-07-15 16:15:41.770713] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb092c0, cid 3, qid 0 00:24:05.948 [2024-07-15 16:15:41.770958] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.948 [2024-07-15 16:15:41.770965] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.948 [2024-07-15 16:15:41.770968] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.948 [2024-07-15 16:15:41.770972] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb092c0) on tqpair=0xa85ec0 00:24:05.948 [2024-07-15 16:15:41.770978] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.948 [2024-07-15 16:15:41.770982] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.948 [2024-07-15 16:15:41.770985] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa85ec0) 00:24:05.948 [2024-07-15 16:15:41.770992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.948 [2024-07-15 16:15:41.771005] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb092c0, cid 3, qid 0 00:24:05.948 [2024-07-15 16:15:41.771204] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.948 [2024-07-15 16:15:41.771211] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.948 [2024-07-15 16:15:41.771215] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.948 [2024-07-15 16:15:41.771219] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb092c0) on tqpair=0xa85ec0 00:24:05.948 [2024-07-15 16:15:41.771223] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:24:05.948 [2024-07-15 16:15:41.771228] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:24:05.948 [2024-07-15 16:15:41.771237] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.948 [2024-07-15 16:15:41.771241] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.948 [2024-07-15 16:15:41.771244] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa85ec0) 00:24:05.948 [2024-07-15 16:15:41.771251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.948 [2024-07-15 16:15:41.771261] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb092c0, cid 3, qid 0 00:24:05.948 [2024-07-15 16:15:41.771460] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.948 [2024-07-15 16:15:41.771466] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.948 [2024-07-15 16:15:41.771469] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.948 [2024-07-15 16:15:41.771473] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb092c0) on tqpair=0xa85ec0 00:24:05.949 [2024-07-15 16:15:41.771484] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.949 [2024-07-15 16:15:41.771489] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.949 [2024-07-15 16:15:41.771492] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa85ec0) 00:24:05.949 [2024-07-15 16:15:41.771499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.949 [2024-07-15 16:15:41.771508] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb092c0, cid 3, qid 0 00:24:05.949 [2024-07-15 16:15:41.771714] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.949 [2024-07-15 16:15:41.771720] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.949 [2024-07-15 16:15:41.771723] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.949 [2024-07-15 16:15:41.771727] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb092c0) on tqpair=0xa85ec0 00:24:05.949 [2024-07-15 16:15:41.771736] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.949 [2024-07-15 16:15:41.771740] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.949 [2024-07-15 16:15:41.771744] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa85ec0) 00:24:05.949 [2024-07-15 16:15:41.771750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.949 [2024-07-15 16:15:41.771760] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb092c0, cid 3, qid 0 00:24:05.949 [2024-07-15 16:15:41.771965] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.949 [2024-07-15 16:15:41.771971] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.949 [2024-07-15 16:15:41.771975] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.949 [2024-07-15 16:15:41.771979] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb092c0) on tqpair=0xa85ec0 00:24:05.949 [2024-07-15 16:15:41.771988] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.949 [2024-07-15 16:15:41.771992] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.949 [2024-07-15 16:15:41.771995] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa85ec0) 00:24:05.949 [2024-07-15 16:15:41.772002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.949 [2024-07-15 16:15:41.772011] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb092c0, cid 3, qid 0 00:24:05.949 [2024-07-15 16:15:41.772199] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.949 [2024-07-15 16:15:41.772205] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.949 [2024-07-15 16:15:41.772209] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.949 [2024-07-15 16:15:41.772213] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb092c0) on tqpair=0xa85ec0 00:24:05.949 [2024-07-15 16:15:41.772222] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.949 [2024-07-15 16:15:41.772226] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.949 [2024-07-15 16:15:41.772229] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa85ec0) 00:24:05.949 [2024-07-15 16:15:41.772236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.949 [2024-07-15 16:15:41.772246] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb092c0, cid 3, qid 0 00:24:05.949 [2024-07-15 16:15:41.772469] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.949 [2024-07-15 16:15:41.772475] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.949 [2024-07-15 16:15:41.772479] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.949 [2024-07-15 16:15:41.772483] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb092c0) on tqpair=0xa85ec0 00:24:05.949 [2024-07-15 16:15:41.772492] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.949 [2024-07-15 16:15:41.772498] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.949 [2024-07-15 16:15:41.772501] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa85ec0) 00:24:05.949 [2024-07-15 16:15:41.772508] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.949 [2024-07-15 16:15:41.772517] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb092c0, cid 3, qid 0 00:24:05.949 [2024-07-15 16:15:41.772720] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.949 [2024-07-15 16:15:41.772727] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.949 [2024-07-15 16:15:41.772730] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.949 [2024-07-15 16:15:41.772734] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb092c0) on tqpair=0xa85ec0 00:24:05.949 [2024-07-15 16:15:41.772743] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.949 [2024-07-15 16:15:41.772747] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.949 [2024-07-15 16:15:41.772750] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa85ec0) 00:24:05.949 [2024-07-15 16:15:41.772757] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.949 [2024-07-15 16:15:41.772766] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb092c0, cid 3, qid 0 00:24:05.949 [2024-07-15 16:15:41.772972] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.949 [2024-07-15 16:15:41.772978] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.949 [2024-07-15 16:15:41.772982] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.949 [2024-07-15 16:15:41.772986] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb092c0) on tqpair=0xa85ec0 00:24:05.949 [2024-07-15 16:15:41.772995] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.949 [2024-07-15 16:15:41.772999] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.949 [2024-07-15 16:15:41.773002] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa85ec0) 00:24:05.949 [2024-07-15 16:15:41.773009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.949 [2024-07-15 16:15:41.773019] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb092c0, cid 3, qid 0 00:24:05.949 [2024-07-15 16:15:41.773222] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.949 [2024-07-15 16:15:41.773228] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.949 [2024-07-15 16:15:41.773232] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.949 [2024-07-15 16:15:41.773236] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb092c0) on tqpair=0xa85ec0 00:24:05.949 
[2024-07-15 16:15:41.773245] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.949 [2024-07-15 16:15:41.773249] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.949 [2024-07-15 16:15:41.773252] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa85ec0) 00:24:05.949 [2024-07-15 16:15:41.773259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.949 [2024-07-15 16:15:41.773269] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb092c0, cid 3, qid 0 00:24:05.949 [2024-07-15 16:15:41.773477] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.949 [2024-07-15 16:15:41.773484] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.949 [2024-07-15 16:15:41.773487] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.949 [2024-07-15 16:15:41.773491] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb092c0) on tqpair=0xa85ec0 00:24:05.949 [2024-07-15 16:15:41.773500] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.949 [2024-07-15 16:15:41.773504] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.949 [2024-07-15 16:15:41.773509] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa85ec0) 00:24:05.949 [2024-07-15 16:15:41.773516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.949 [2024-07-15 16:15:41.773525] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb092c0, cid 3, qid 0 00:24:05.949 [2024-07-15 16:15:41.773728] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.949 [2024-07-15 16:15:41.773734] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.949 [2024-07-15 16:15:41.773738] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.949 [2024-07-15 16:15:41.773741] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb092c0) on tqpair=0xa85ec0 00:24:05.949 [2024-07-15 16:15:41.773750] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.949 [2024-07-15 16:15:41.773754] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.949 [2024-07-15 16:15:41.773757] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa85ec0) 00:24:05.949 [2024-07-15 16:15:41.773764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.949 [2024-07-15 16:15:41.773773] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb092c0, cid 3, qid 0 00:24:05.949 [2024-07-15 16:15:41.773982] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.949 [2024-07-15 16:15:41.773988] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.949 [2024-07-15 16:15:41.773991] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.949 [2024-07-15 16:15:41.773995] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb092c0) on tqpair=0xa85ec0 00:24:05.949 [2024-07-15 16:15:41.774004] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.949 [2024-07-15 16:15:41.774008] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.949 [2024-07-15 
16:15:41.774012] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa85ec0) 00:24:05.949 [2024-07-15 16:15:41.774018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.949 [2024-07-15 16:15:41.774028] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb092c0, cid 3, qid 0 00:24:05.949 [2024-07-15 16:15:41.774241] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.949 [2024-07-15 16:15:41.774248] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.949 [2024-07-15 16:15:41.774251] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.949 [2024-07-15 16:15:41.774255] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb092c0) on tqpair=0xa85ec0 00:24:05.949 [2024-07-15 16:15:41.774264] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.949 [2024-07-15 16:15:41.774268] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.949 [2024-07-15 16:15:41.774272] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa85ec0) 00:24:05.949 [2024-07-15 16:15:41.774278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.949 [2024-07-15 16:15:41.774288] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb092c0, cid 3, qid 0 00:24:05.949 [2024-07-15 16:15:41.774534] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.949 [2024-07-15 16:15:41.774540] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.949 [2024-07-15 16:15:41.774544] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.949 [2024-07-15 16:15:41.774547] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb092c0) on tqpair=0xa85ec0 00:24:05.949 [2024-07-15 16:15:41.774557] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.949 [2024-07-15 16:15:41.774560] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.949 [2024-07-15 16:15:41.774564] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa85ec0) 00:24:05.949 [2024-07-15 16:15:41.774572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.949 [2024-07-15 16:15:41.774582] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb092c0, cid 3, qid 0 00:24:05.949 [2024-07-15 16:15:41.774788] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.949 [2024-07-15 16:15:41.774794] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.949 [2024-07-15 16:15:41.774797] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.950 [2024-07-15 16:15:41.774801] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb092c0) on tqpair=0xa85ec0 00:24:05.950 [2024-07-15 16:15:41.774810] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.950 [2024-07-15 16:15:41.774814] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.950 [2024-07-15 16:15:41.774818] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa85ec0) 00:24:05.950 [2024-07-15 16:15:41.774824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.950 [2024-07-15 16:15:41.774834] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb092c0, cid 3, qid 0 00:24:05.950 [2024-07-15 16:15:41.775038] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.950 [2024-07-15 16:15:41.775044] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.950 [2024-07-15 16:15:41.775048] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.950 [2024-07-15 16:15:41.775051] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb092c0) on tqpair=0xa85ec0 00:24:05.950 [2024-07-15 16:15:41.775061] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.950 [2024-07-15 16:15:41.775065] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.950 [2024-07-15 16:15:41.775068] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa85ec0) 00:24:05.950 [2024-07-15 16:15:41.775075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.950 [2024-07-15 16:15:41.775084] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb092c0, cid 3, qid 0 00:24:05.950 [2024-07-15 16:15:41.775286] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.950 [2024-07-15 16:15:41.775293] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.950 [2024-07-15 16:15:41.775296] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.950 [2024-07-15 16:15:41.775300] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb092c0) on tqpair=0xa85ec0 00:24:05.950 [2024-07-15 16:15:41.775310] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.950 [2024-07-15 16:15:41.775314] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.950 [2024-07-15 16:15:41.775317] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa85ec0) 00:24:05.950 [2024-07-15 16:15:41.775324] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.950 [2024-07-15 16:15:41.775333] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb092c0, cid 3, qid 0 00:24:05.950 [2024-07-15 16:15:41.775593] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.950 [2024-07-15 16:15:41.775599] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.950 [2024-07-15 16:15:41.775603] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.950 [2024-07-15 16:15:41.775606] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb092c0) on tqpair=0xa85ec0 00:24:05.950 [2024-07-15 16:15:41.775616] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.950 [2024-07-15 16:15:41.775619] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.950 [2024-07-15 16:15:41.775623] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa85ec0) 00:24:05.950 [2024-07-15 16:15:41.775629] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.950 [2024-07-15 16:15:41.775641] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb092c0, cid 3, qid 0 00:24:05.950 [2024-07-15 
16:15:41.775847] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.950 [2024-07-15 16:15:41.775853] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.950 [2024-07-15 16:15:41.775856] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.950 [2024-07-15 16:15:41.775860] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb092c0) on tqpair=0xa85ec0 00:24:05.950 [2024-07-15 16:15:41.775869] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.950 [2024-07-15 16:15:41.775873] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.950 [2024-07-15 16:15:41.775876] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa85ec0) 00:24:05.950 [2024-07-15 16:15:41.775883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.950 [2024-07-15 16:15:41.775892] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb092c0, cid 3, qid 0 00:24:05.950 [2024-07-15 16:15:41.776098] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.950 [2024-07-15 16:15:41.776105] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.950 [2024-07-15 16:15:41.776108] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.950 [2024-07-15 16:15:41.776112] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb092c0) on tqpair=0xa85ec0 00:24:05.950 [2024-07-15 16:15:41.776126] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.950 [2024-07-15 16:15:41.776130] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.950 [2024-07-15 16:15:41.776133] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa85ec0) 00:24:05.950 [2024-07-15 16:15:41.776140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.950 [2024-07-15 16:15:41.776150] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb092c0, cid 3, qid 0 00:24:05.950 [2024-07-15 16:15:41.776362] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.950 [2024-07-15 16:15:41.776368] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.950 [2024-07-15 16:15:41.776371] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.950 [2024-07-15 16:15:41.776375] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb092c0) on tqpair=0xa85ec0 00:24:05.950 [2024-07-15 16:15:41.776385] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.950 [2024-07-15 16:15:41.776389] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.950 [2024-07-15 16:15:41.776392] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa85ec0) 00:24:05.950 [2024-07-15 16:15:41.776399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.950 [2024-07-15 16:15:41.776408] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb092c0, cid 3, qid 0 00:24:05.950 [2024-07-15 16:15:41.776650] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.950 [2024-07-15 16:15:41.776657] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.950 [2024-07-15 
16:15:41.776660] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.950 [2024-07-15 16:15:41.776664] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb092c0) on tqpair=0xa85ec0 00:24:05.950 [2024-07-15 16:15:41.776673] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.950 [2024-07-15 16:15:41.776677] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.950 [2024-07-15 16:15:41.776680] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa85ec0) 00:24:05.950 [2024-07-15 16:15:41.776687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.950 [2024-07-15 16:15:41.776696] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb092c0, cid 3, qid 0 00:24:05.950 [2024-07-15 16:15:41.776955] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.950 [2024-07-15 16:15:41.776962] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.950 [2024-07-15 16:15:41.776965] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.950 [2024-07-15 16:15:41.776969] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb092c0) on tqpair=0xa85ec0 00:24:05.950 [2024-07-15 16:15:41.776978] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.950 [2024-07-15 16:15:41.776982] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.950 [2024-07-15 16:15:41.776986] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa85ec0) 00:24:05.950 [2024-07-15 16:15:41.776992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.950 [2024-07-15 16:15:41.777002] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb092c0, cid 3, qid 0 00:24:06.211 [2024-07-15 16:15:41.781132] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:06.211 [2024-07-15 16:15:41.781141] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:06.211 [2024-07-15 16:15:41.781144] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:06.211 [2024-07-15 16:15:41.781148] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb092c0) on tqpair=0xa85ec0 00:24:06.211 [2024-07-15 16:15:41.781156] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 9 milliseconds 00:24:06.211 0% 00:24:06.211 Data Units Read: 0 00:24:06.211 Data Units Written: 0 00:24:06.211 Host Read Commands: 0 00:24:06.211 Host Write Commands: 0 00:24:06.211 Controller Busy Time: 0 minutes 00:24:06.211 Power Cycles: 0 00:24:06.211 Power On Hours: 0 hours 00:24:06.211 Unsafe Shutdowns: 0 00:24:06.211 Unrecoverable Media Errors: 0 00:24:06.211 Lifetime Error Log Entries: 0 00:24:06.211 Warning Temperature Time: 0 minutes 00:24:06.211 Critical Temperature Time: 0 minutes 00:24:06.211 00:24:06.211 Number of Queues 00:24:06.211 ================ 00:24:06.211 Number of I/O Submission Queues: 127 00:24:06.211 Number of I/O Completion Queues: 127 00:24:06.211 00:24:06.211 Active Namespaces 00:24:06.211 ================= 00:24:06.211 Namespace ID:1 00:24:06.211 Error Recovery Timeout: Unlimited 00:24:06.211 Command Set Identifier: NVM (00h) 00:24:06.211 Deallocate: Supported 00:24:06.211 Deallocated/Unwritten Error: Not Supported 
00:24:06.211 Deallocated Read Value: Unknown 00:24:06.211 Deallocate in Write Zeroes: Not Supported 00:24:06.211 Deallocated Guard Field: 0xFFFF 00:24:06.211 Flush: Supported 00:24:06.211 Reservation: Supported 00:24:06.211 Namespace Sharing Capabilities: Multiple Controllers 00:24:06.211 Size (in LBAs): 131072 (0GiB) 00:24:06.211 Capacity (in LBAs): 131072 (0GiB) 00:24:06.211 Utilization (in LBAs): 131072 (0GiB) 00:24:06.211 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:06.211 EUI64: ABCDEF0123456789 00:24:06.211 UUID: 997e0883-9575-455f-b60a-8dffc0dfa446 00:24:06.211 Thin Provisioning: Not Supported 00:24:06.211 Per-NS Atomic Units: Yes 00:24:06.211 Atomic Boundary Size (Normal): 0 00:24:06.211 Atomic Boundary Size (PFail): 0 00:24:06.211 Atomic Boundary Offset: 0 00:24:06.211 Maximum Single Source Range Length: 65535 00:24:06.211 Maximum Copy Length: 65535 00:24:06.211 Maximum Source Range Count: 1 00:24:06.211 NGUID/EUI64 Never Reused: No 00:24:06.211 Namespace Write Protected: No 00:24:06.211 Number of LBA Formats: 1 00:24:06.211 Current LBA Format: LBA Format #00 00:24:06.211 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:06.211 00:24:06.211 16:15:41 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:06.211 16:15:41 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:06.211 16:15:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.211 16:15:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:06.211 16:15:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.211 16:15:41 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:06.211 16:15:41 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:06.211 16:15:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:06.211 16:15:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:24:06.211 16:15:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:06.211 16:15:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:24:06.211 16:15:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:06.211 16:15:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:06.211 rmmod nvme_tcp 00:24:06.211 rmmod nvme_fabrics 00:24:06.211 rmmod nvme_keyring 00:24:06.211 16:15:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:06.211 16:15:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:24:06.211 16:15:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:24:06.211 16:15:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2383576 ']' 00:24:06.211 16:15:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2383576 00:24:06.211 16:15:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 2383576 ']' 00:24:06.211 16:15:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 2383576 00:24:06.211 16:15:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:24:06.211 16:15:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:06.211 16:15:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2383576 00:24:06.211 16:15:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:06.211 16:15:41 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:06.211 16:15:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2383576' 00:24:06.211 killing process with pid 2383576 00:24:06.211 16:15:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 2383576 00:24:06.211 16:15:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 2383576 00:24:06.472 16:15:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:06.472 16:15:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:06.472 16:15:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:06.472 16:15:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:06.472 16:15:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:06.472 16:15:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.472 16:15:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:06.472 16:15:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.402 16:15:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:08.402 00:24:08.402 real 0m10.657s 00:24:08.402 user 0m8.003s 00:24:08.402 sys 0m5.396s 00:24:08.402 16:15:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:08.402 16:15:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:08.402 ************************************ 00:24:08.402 END TEST nvmf_identify 00:24:08.402 ************************************ 00:24:08.402 16:15:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:08.402 16:15:44 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:08.403 16:15:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:08.403 16:15:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:08.403 16:15:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:08.403 ************************************ 00:24:08.403 START TEST nvmf_perf 00:24:08.403 ************************************ 00:24:08.403 16:15:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:08.664 * Looking for test storage... 
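The trace above is the teardown of the identify test: the subsystem nqn.2016-06.io.spdk:cnode1 is deleted over RPC, the host-side nvme-tcp/nvme-fabrics/nvme-keyring modules are unloaded, and the nvmf_tgt process (pid 2383576 in this run) is killed before the nvmf_perf test starts. A rough manual sketch of the same teardown, assuming scripts/rpc.py talks to the default nvmf_tgt RPC socket and reusing the names from this run, is:

    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # rpc_cmd in the harness effectively forwards to scripts/rpc.py
    modprobe -v -r nvme-tcp                                             # unload the host-side initiator modules
    modprobe -v -r nvme-fabrics
    kill 2383576                                                        # stop the nvmf_tgt reactor (killprocess above)
    ip netns delete cvl_0_0_ns_spdk                                     # drop the target namespace (approximates remove_spdk_ns)

The helpers used in the log (nvmftestfini, killprocess, remove_spdk_ns) wrap these steps with additional checks and xtrace, so this sketch is only an approximation of the scripted cleanup, not the exact sequence.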
00:24:08.664 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.664 16:15:44 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:08.664 16:15:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:24:15.250 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:15.250 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:15.250 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:15.250 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:15.250 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:15.250 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:15.250 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:15.250 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:24:15.250 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:15.250 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:24:15.250 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:24:15.250 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:24:15.250 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:24:15.250 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:24:15.250 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:15.250 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:15.250 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:15.250 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:15.251 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:15.251 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:15.251 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:15.251 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:15.251 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:15.251 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.517 ms 00:24:15.251 00:24:15.251 --- 10.0.0.2 ping statistics --- 00:24:15.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:15.251 rtt min/avg/max/mdev = 0.517/0.517/0.517/0.000 ms 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:15.251 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:15.251 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:24:15.251 00:24:15.251 --- 10.0.0.1 ping statistics --- 00:24:15.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:15.251 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2387904 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2387904 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 2387904 ']' 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:15.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:15.251 16:15:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:15.251 [2024-07-15 16:15:51.032956] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:24:15.251 [2024-07-15 16:15:51.033010] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:15.251 EAL: No free 2048 kB hugepages reported on node 1 00:24:15.510 [2024-07-15 16:15:51.100897] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:15.510 [2024-07-15 16:15:51.168350] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:15.510 [2024-07-15 16:15:51.168403] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
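For reference, the nvmf_tcp_init sequence traced a little earlier boils down to the following shell steps. This is only a sketch using the values from this particular run (ports cvl_0_0/cvl_0_1, namespace cvl_0_0_ns_spdk, the 10.0.0.0/24 addressing and NVMe/TCP port 4420); other rigs will use different interface names and addresses:

    # clear any stale addressing on both ports
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    # give the target-side port its own network namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator keeps cvl_0_1 at 10.0.0.1 in the root namespace, the target gets 10.0.0.2
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # let NVMe/TCP traffic through on the default port
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # verify reachability in both directions before starting the target
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1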
00:24:15.510 [2024-07-15 16:15:51.168411] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:15.510 [2024-07-15 16:15:51.168418] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:15.510 [2024-07-15 16:15:51.168423] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:15.510 [2024-07-15 16:15:51.168491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:15.510 [2024-07-15 16:15:51.168625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:15.510 [2024-07-15 16:15:51.168781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:15.510 [2024-07-15 16:15:51.168783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:16.078 16:15:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:16.078 16:15:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:24:16.078 16:15:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:16.078 16:15:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:16.078 16:15:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:16.078 16:15:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:16.078 16:15:51 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:16.078 16:15:51 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:16.647 16:15:52 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:16.647 16:15:52 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:16.907 16:15:52 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:24:16.907 16:15:52 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:16.907 16:15:52 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:16.907 16:15:52 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:24:16.907 16:15:52 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:16.907 16:15:52 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:16.907 16:15:52 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:17.167 [2024-07-15 16:15:52.809379] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:17.167 16:15:52 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:17.167 16:15:53 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:17.167 16:15:53 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:17.427 16:15:53 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:17.427 16:15:53 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:17.685 16:15:53 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:17.685 [2024-07-15 16:15:53.463773] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:17.685 16:15:53 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:17.944 16:15:53 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:24:17.944 16:15:53 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:17.944 16:15:53 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:17.944 16:15:53 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:19.326 Initializing NVMe Controllers 00:24:19.326 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:24:19.326 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:24:19.326 Initialization complete. Launching workers. 00:24:19.326 ======================================================== 00:24:19.326 Latency(us) 00:24:19.326 Device Information : IOPS MiB/s Average min max 00:24:19.326 PCIE (0000:65:00.0) NSID 1 from core 0: 79507.40 310.58 401.98 13.18 6202.02 00:24:19.326 ======================================================== 00:24:19.326 Total : 79507.40 310.58 401.98 13.18 6202.02 00:24:19.326 00:24:19.326 16:15:54 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:19.326 EAL: No free 2048 kB hugepages reported on node 1 00:24:20.708 Initializing NVMe Controllers 00:24:20.708 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:20.708 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:20.708 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:20.708 Initialization complete. Launching workers. 
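The target for this perf run was provisioned entirely over JSON-RPC; condensed into a sketch, the calls traced above look as follows ($SPDK is shorthand for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk path, everything else is taken from the trace; the harness waits for the RPC socket before issuing these calls):

    # start the target inside the namespace that owns cvl_0_0
    ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # TCP transport plus one subsystem backed by a RAM disk and the local NVMe drive;
    # Nvme0n1 comes from gen_nvme.sh feeding rpc.py load_subsystem_config (perf.sh line 28 above)
    $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512            # creates Malloc0
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    # expose the subsystem and the discovery service on the target address
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The latency table for the NVMe-oF perf run launched just above continues below.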
00:24:20.708 ======================================================== 00:24:20.709 Latency(us) 00:24:20.709 Device Information : IOPS MiB/s Average min max 00:24:20.709 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 118.57 0.46 8587.97 349.97 45986.79 00:24:20.709 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.79 0.22 17747.14 4962.41 47904.97 00:24:20.709 ======================================================== 00:24:20.709 Total : 175.36 0.69 11554.30 349.97 47904.97 00:24:20.709 00:24:20.709 16:15:56 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:20.709 EAL: No free 2048 kB hugepages reported on node 1 00:24:22.091 Initializing NVMe Controllers 00:24:22.091 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:22.091 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:22.091 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:22.091 Initialization complete. Launching workers. 00:24:22.091 ======================================================== 00:24:22.091 Latency(us) 00:24:22.091 Device Information : IOPS MiB/s Average min max 00:24:22.091 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9852.90 38.49 3267.00 517.79 44625.52 00:24:22.091 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3800.96 14.85 8456.91 4772.73 16247.21 00:24:22.091 ======================================================== 00:24:22.092 Total : 13653.86 53.34 4711.76 517.79 44625.52 00:24:22.092 00:24:22.092 16:15:57 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:22.092 16:15:57 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:22.092 16:15:57 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:22.092 EAL: No free 2048 kB hugepages reported on node 1 00:24:24.635 Initializing NVMe Controllers 00:24:24.635 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:24.635 Controller IO queue size 128, less than required. 00:24:24.635 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:24.635 Controller IO queue size 128, less than required. 00:24:24.635 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:24.635 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:24.635 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:24.635 Initialization complete. Launching workers. 
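On the initiator side these numbers come straight from spdk_nvme_perf pointed at the TCP listener; as a sketch, the simplest variant traced above (the queue-depth-1 run) looks like this, with $SPDK again standing in for the SPDK tree path:

    # -q 1: one outstanding I/O; -o 4096: 4 KiB I/Os; -w randrw -M 50: 50/50 random read/write mix;
    # -t 1: run for one second; -r: transport ID of the listener created earlier
    $SPDK/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

The subsequent runs in this test keep the same shape and mainly vary queue depth and I/O size, plus the extra flags visible in their command lines in the trace; the large-block run launched just above prints its latency table below.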
00:24:24.635 ======================================================== 00:24:24.635 Latency(us) 00:24:24.635 Device Information : IOPS MiB/s Average min max 00:24:24.635 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 908.24 227.06 145594.63 86354.01 224897.40 00:24:24.635 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 601.83 150.46 226216.05 69356.96 360206.08 00:24:24.635 ======================================================== 00:24:24.635 Total : 1510.08 377.52 177725.77 69356.96 360206.08 00:24:24.635 00:24:24.635 16:16:00 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:24.635 EAL: No free 2048 kB hugepages reported on node 1 00:24:24.635 No valid NVMe controllers or AIO or URING devices found 00:24:24.635 Initializing NVMe Controllers 00:24:24.635 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:24.635 Controller IO queue size 128, less than required. 00:24:24.635 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:24.635 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:24.635 Controller IO queue size 128, less than required. 00:24:24.635 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:24.635 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:24:24.635 WARNING: Some requested NVMe devices were skipped 00:24:24.635 16:16:00 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:24.896 EAL: No free 2048 kB hugepages reported on node 1 00:24:27.442 Initializing NVMe Controllers 00:24:27.442 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:27.442 Controller IO queue size 128, less than required. 00:24:27.442 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:27.442 Controller IO queue size 128, less than required. 00:24:27.442 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:27.442 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:27.442 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:27.442 Initialization complete. Launching workers. 
00:24:27.442 00:24:27.442 ==================== 00:24:27.442 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:27.442 TCP transport: 00:24:27.442 polls: 38673 00:24:27.442 idle_polls: 12149 00:24:27.442 sock_completions: 26524 00:24:27.442 nvme_completions: 3807 00:24:27.442 submitted_requests: 5752 00:24:27.443 queued_requests: 1 00:24:27.443 00:24:27.443 ==================== 00:24:27.443 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:27.443 TCP transport: 00:24:27.443 polls: 40976 00:24:27.443 idle_polls: 13993 00:24:27.443 sock_completions: 26983 00:24:27.443 nvme_completions: 3853 00:24:27.443 submitted_requests: 5776 00:24:27.443 queued_requests: 1 00:24:27.443 ======================================================== 00:24:27.443 Latency(us) 00:24:27.443 Device Information : IOPS MiB/s Average min max 00:24:27.443 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 950.36 237.59 138520.59 69457.63 211554.67 00:24:27.443 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 961.85 240.46 138443.51 81130.08 219511.98 00:24:27.443 ======================================================== 00:24:27.443 Total : 1912.21 478.05 138481.82 69457.63 219511.98 00:24:27.443 00:24:27.443 16:16:02 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:27.443 16:16:02 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:27.443 16:16:03 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:27.443 16:16:03 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:27.443 16:16:03 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:27.443 16:16:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:27.443 16:16:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:24:27.443 16:16:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:27.443 16:16:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:24:27.443 16:16:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:27.443 16:16:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:27.443 rmmod nvme_tcp 00:24:27.443 rmmod nvme_fabrics 00:24:27.443 rmmod nvme_keyring 00:24:27.443 16:16:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:27.443 16:16:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:24:27.443 16:16:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:24:27.443 16:16:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2387904 ']' 00:24:27.443 16:16:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2387904 00:24:27.443 16:16:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 2387904 ']' 00:24:27.443 16:16:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 2387904 00:24:27.443 16:16:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:24:27.443 16:16:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:27.443 16:16:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2387904 00:24:27.443 16:16:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:27.443 16:16:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:27.443 16:16:03 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2387904' 00:24:27.443 killing process with pid 2387904 00:24:27.443 16:16:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 2387904 00:24:27.443 16:16:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 2387904 00:24:29.989 16:16:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:29.989 16:16:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:29.989 16:16:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:29.989 16:16:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:29.989 16:16:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:29.989 16:16:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:29.989 16:16:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:29.989 16:16:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.997 16:16:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:31.997 00:24:31.997 real 0m23.070s 00:24:31.997 user 0m57.501s 00:24:31.997 sys 0m7.302s 00:24:31.997 16:16:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:31.997 16:16:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:31.997 ************************************ 00:24:31.997 END TEST nvmf_perf 00:24:31.997 ************************************ 00:24:31.997 16:16:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:31.997 16:16:07 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:31.997 16:16:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:31.997 16:16:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:31.997 16:16:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:31.997 ************************************ 00:24:31.997 START TEST nvmf_fio_host 00:24:31.997 ************************************ 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:31.997 * Looking for test storage... 
00:24:31.997 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:31.997 16:16:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.998 16:16:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:31.998 16:16:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.998 16:16:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:31.998 16:16:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:31.998 16:16:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:31.998 16:16:07 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:38.582 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:38.582 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:38.582 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:38.582 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
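The device-discovery loop being traced here (it runs at the start of every test in this job) resolves each supported PCI function to its kernel net device purely through sysfs. Reduced to a standalone bash sketch using the first port reported on this machine:

    pci=0000:4b:00.0
    # the kernel lists the netdev(s) bound to a PCI function under its device node
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    # strip the sysfs prefix, keeping only interface names such as cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"

The harness additionally checks that each interface is up (the [[ up == up ]] tests above) before appending it to net_devs, which is how cvl_0_0 and cvl_0_1 end up as the target and initiator interfaces for the namespace setup that follows.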
00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:38.582 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:38.854 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:38.854 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:38.854 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:38.854 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:38.854 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:38.854 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:38.854 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:38.854 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:39.115 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:39.115 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:39.115 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.511 ms 00:24:39.115 00:24:39.115 --- 10.0.0.2 ping statistics --- 00:24:39.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.115 rtt min/avg/max/mdev = 0.511/0.511/0.511/0.000 ms 00:24:39.115 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:39.115 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:39.115 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.375 ms 00:24:39.115 00:24:39.115 --- 10.0.0.1 ping statistics --- 00:24:39.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.115 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:24:39.115 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:39.115 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:24:39.115 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:39.115 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:39.115 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:39.115 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:39.115 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:39.115 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:39.115 16:16:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:39.115 16:16:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:39.115 16:16:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:39.115 16:16:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:39.115 16:16:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.115 16:16:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2394947 00:24:39.115 16:16:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:39.115 16:16:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:39.115 16:16:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2394947 00:24:39.115 16:16:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 2394947 ']' 00:24:39.115 16:16:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:39.115 16:16:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:39.115 16:16:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:39.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:39.115 16:16:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:39.115 16:16:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.115 [2024-07-15 16:16:14.823974] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:24:39.115 [2024-07-15 16:16:14.824024] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:39.115 EAL: No free 2048 kB hugepages reported on node 1 00:24:39.115 [2024-07-15 16:16:14.889818] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:39.115 [2024-07-15 16:16:14.955731] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:39.115 [2024-07-15 16:16:14.955767] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:39.115 [2024-07-15 16:16:14.955774] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:39.115 [2024-07-15 16:16:14.955781] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:39.115 [2024-07-15 16:16:14.955787] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:39.115 [2024-07-15 16:16:14.955921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:39.115 [2024-07-15 16:16:14.956038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:39.115 [2024-07-15 16:16:14.956196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:39.115 [2024-07-15 16:16:14.956197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:40.055 16:16:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:40.055 16:16:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:24:40.055 16:16:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:40.055 [2024-07-15 16:16:15.738162] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:40.055 16:16:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:40.055 16:16:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:40.055 16:16:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.055 16:16:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:40.315 Malloc1 00:24:40.315 16:16:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:40.315 16:16:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:40.574 16:16:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:40.834 [2024-07-15 16:16:16.455997] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:40.834 16:16:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:40.834 16:16:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:40.834 16:16:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:40.834 16:16:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:24:40.834 16:16:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:40.834 16:16:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:40.834 16:16:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:40.834 16:16:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:40.834 16:16:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:40.834 16:16:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:40.834 16:16:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:40.834 16:16:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:40.834 16:16:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:40.834 16:16:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:40.834 16:16:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:40.834 16:16:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:40.834 16:16:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:40.834 16:16:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:40.834 16:16:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:40.834 16:16:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:41.122 16:16:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:41.122 16:16:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:41.123 16:16:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:41.123 16:16:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:41.387 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:41.387 fio-3.35 00:24:41.387 Starting 1 thread 00:24:41.387 EAL: No free 2048 kB hugepages reported on node 1 00:24:43.940 00:24:43.940 test: (groupid=0, jobs=1): err= 0: pid=2395482: Mon Jul 15 16:16:19 2024 00:24:43.940 read: IOPS=11.6k, BW=45.4MiB/s (47.6MB/s)(90.9MiB/2004msec) 00:24:43.940 slat (usec): min=2, max=276, avg= 2.19, stdev= 2.59 00:24:43.940 clat (usec): min=3272, max=9265, avg=6175.80, stdev=1217.07 00:24:43.940 lat (usec): min=3274, max=9267, avg=6177.99, stdev=1217.09 00:24:43.940 clat percentiles (usec): 00:24:43.940 | 1.00th=[ 4178], 5.00th=[ 4621], 10.00th=[ 4817], 20.00th=[ 5080], 00:24:43.940 | 30.00th=[ 5276], 40.00th=[ 5407], 50.00th=[ 5735], 60.00th=[ 6521], 00:24:43.940 | 70.00th=[ 7177], 80.00th=[ 7504], 90.00th=[ 7898], 95.00th=[ 8160], 00:24:43.940 | 99.00th=[ 8586], 99.50th=[ 8717], 99.90th=[ 8848], 99.95th=[ 8979], 00:24:43.940 | 99.99th=[ 9241] 00:24:43.940 bw ( KiB/s): min=36392, 
max=54296, per=99.86%, avg=46400.00, stdev=9205.00, samples=4 00:24:43.940 iops : min= 9098, max=13574, avg=11600.00, stdev=2301.25, samples=4 00:24:43.940 write: IOPS=11.5k, BW=45.0MiB/s (47.2MB/s)(90.3MiB/2004msec); 0 zone resets 00:24:43.940 slat (usec): min=2, max=271, avg= 2.29, stdev= 1.96 00:24:43.940 clat (usec): min=2196, max=8123, avg=4815.04, stdev=1030.10 00:24:43.940 lat (usec): min=2199, max=8125, avg=4817.33, stdev=1030.15 00:24:43.940 clat percentiles (usec): 00:24:43.940 | 1.00th=[ 2868], 5.00th=[ 3392], 10.00th=[ 3720], 20.00th=[ 3982], 00:24:43.940 | 30.00th=[ 4146], 40.00th=[ 4293], 50.00th=[ 4424], 60.00th=[ 4752], 00:24:43.941 | 70.00th=[ 5669], 80.00th=[ 5997], 90.00th=[ 6259], 95.00th=[ 6521], 00:24:43.941 | 99.00th=[ 6849], 99.50th=[ 6980], 99.90th=[ 7177], 99.95th=[ 7308], 00:24:43.941 | 99.99th=[ 7635] 00:24:43.941 bw ( KiB/s): min=37376, max=54400, per=100.00%, avg=46124.00, stdev=8814.72, samples=4 00:24:43.941 iops : min= 9344, max=13600, avg=11531.00, stdev=2203.68, samples=4 00:24:43.941 lat (msec) : 4=10.88%, 10=89.12% 00:24:43.941 cpu : usr=65.60%, sys=28.96%, ctx=41, majf=0, minf=7 00:24:43.941 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:43.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.941 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:43.941 issued rwts: total=23280,23109,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:43.941 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:43.941 00:24:43.941 Run status group 0 (all jobs): 00:24:43.941 READ: bw=45.4MiB/s (47.6MB/s), 45.4MiB/s-45.4MiB/s (47.6MB/s-47.6MB/s), io=90.9MiB (95.4MB), run=2004-2004msec 00:24:43.941 WRITE: bw=45.0MiB/s (47.2MB/s), 45.0MiB/s-45.0MiB/s (47.2MB/s-47.2MB/s), io=90.3MiB (94.7MB), run=2004-2004msec 00:24:43.941 16:16:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:43.941 16:16:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:43.941 16:16:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:43.941 16:16:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:43.941 16:16:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:43.941 16:16:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:43.941 16:16:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:43.941 16:16:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:43.941 16:16:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:43.941 16:16:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:43.941 16:16:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:43.941 16:16:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk 
'{print $3}' 00:24:43.941 16:16:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:43.941 16:16:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:43.941 16:16:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:43.941 16:16:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:43.941 16:16:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:43.941 16:16:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:43.941 16:16:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:43.941 16:16:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:43.941 16:16:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:43.941 16:16:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:44.201 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:44.201 fio-3.35 00:24:44.201 Starting 1 thread 00:24:44.201 EAL: No free 2048 kB hugepages reported on node 1 00:24:46.749 00:24:46.749 test: (groupid=0, jobs=1): err= 0: pid=2396304: Mon Jul 15 16:16:22 2024 00:24:46.749 read: IOPS=8554, BW=134MiB/s (140MB/s)(268MiB/2003msec) 00:24:46.749 slat (usec): min=3, max=110, avg= 3.63, stdev= 1.46 00:24:46.749 clat (usec): min=2770, max=52399, avg=9225.48, stdev=4065.83 00:24:46.749 lat (usec): min=2774, max=52403, avg=9229.10, stdev=4065.92 00:24:46.749 clat percentiles (usec): 00:24:46.749 | 1.00th=[ 4359], 5.00th=[ 5473], 10.00th=[ 6063], 20.00th=[ 6849], 00:24:46.749 | 30.00th=[ 7504], 40.00th=[ 8160], 50.00th=[ 8848], 60.00th=[ 9503], 00:24:46.749 | 70.00th=[10159], 80.00th=[11076], 90.00th=[12256], 95.00th=[13173], 00:24:46.749 | 99.00th=[16450], 99.50th=[50070], 99.90th=[52167], 99.95th=[52167], 00:24:46.749 | 99.99th=[52167] 00:24:46.749 bw ( KiB/s): min=51744, max=81760, per=50.70%, avg=69392.00, stdev=14726.34, samples=4 00:24:46.749 iops : min= 3234, max= 5110, avg=4337.00, stdev=920.40, samples=4 00:24:46.749 write: IOPS=5018, BW=78.4MiB/s (82.2MB/s)(142MiB/1813msec); 0 zone resets 00:24:46.749 slat (usec): min=40, max=327, avg=41.13, stdev= 7.68 00:24:46.749 clat (usec): min=2434, max=54620, avg=9836.01, stdev=2542.96 00:24:46.749 lat (usec): min=2474, max=54660, avg=9877.15, stdev=2544.37 00:24:46.749 clat percentiles (usec): 00:24:46.749 | 1.00th=[ 6652], 5.00th=[ 7439], 10.00th=[ 7832], 20.00th=[ 8356], 00:24:46.749 | 30.00th=[ 8848], 40.00th=[ 9241], 50.00th=[ 9634], 60.00th=[10028], 00:24:46.749 | 70.00th=[10421], 80.00th=[11076], 90.00th=[11994], 95.00th=[12649], 00:24:46.749 | 99.00th=[15008], 99.50th=[15401], 99.90th=[53216], 99.95th=[54264], 00:24:46.749 | 99.99th=[54789] 00:24:46.749 bw ( KiB/s): min=54784, max=84224, per=90.04%, avg=72296.00, stdev=14415.10, samples=4 00:24:46.749 iops : min= 3424, max= 5264, avg=4518.50, stdev=900.94, samples=4 00:24:46.749 lat (msec) : 4=0.33%, 10=65.11%, 20=34.07%, 50=0.07%, 100=0.42% 00:24:46.749 cpu : usr=81.17%, sys=15.33%, ctx=17, majf=0, minf=14 00:24:46.749 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:46.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.749 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:46.749 issued rwts: total=17134,9098,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:46.749 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:46.749 00:24:46.749 Run status group 0 (all jobs): 00:24:46.749 READ: bw=134MiB/s (140MB/s), 134MiB/s-134MiB/s (140MB/s-140MB/s), io=268MiB (281MB), run=2003-2003msec 00:24:46.749 WRITE: bw=78.4MiB/s (82.2MB/s), 78.4MiB/s-78.4MiB/s (82.2MB/s-82.2MB/s), io=142MiB (149MB), run=1813-1813msec 00:24:46.749 16:16:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:46.749 16:16:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:46.749 16:16:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:46.749 16:16:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:46.749 16:16:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:46.749 16:16:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:46.749 16:16:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:24:46.749 16:16:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:46.749 16:16:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:24:46.749 16:16:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:46.749 16:16:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:46.749 rmmod nvme_tcp 00:24:46.749 rmmod nvme_fabrics 00:24:46.749 rmmod nvme_keyring 00:24:46.749 16:16:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:46.749 16:16:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:24:46.749 16:16:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:24:46.749 16:16:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2394947 ']' 00:24:46.749 16:16:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 2394947 00:24:46.749 16:16:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 2394947 ']' 00:24:46.749 16:16:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 2394947 00:24:46.749 16:16:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:24:46.749 16:16:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:46.749 16:16:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2394947 00:24:46.749 16:16:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:46.749 16:16:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:46.749 16:16:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2394947' 00:24:46.749 killing process with pid 2394947 00:24:46.749 16:16:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 2394947 00:24:46.749 16:16:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 2394947 00:24:47.010 16:16:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:47.010 16:16:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:47.010 
16:16:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:47.010 16:16:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:47.010 16:16:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:47.010 16:16:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:47.010 16:16:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:47.010 16:16:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:48.923 16:16:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:48.923 00:24:48.923 real 0m17.308s 00:24:48.923 user 1m9.992s 00:24:48.923 sys 0m7.379s 00:24:48.923 16:16:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:48.923 16:16:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.923 ************************************ 00:24:48.923 END TEST nvmf_fio_host 00:24:48.923 ************************************ 00:24:48.923 16:16:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:48.923 16:16:24 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:48.923 16:16:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:48.923 16:16:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:48.923 16:16:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:48.923 ************************************ 00:24:48.923 START TEST nvmf_failover 00:24:48.923 ************************************ 00:24:48.923 16:16:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:49.184 * Looking for test storage... 
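For reference, the nvmf_fio_host flow that just finished boils down to the sketch below. It is reconstructed only from the commands visible in this log, not from the test scripts themselves; $rootdir abbreviates /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk, and the fio binary location is taken from the LD_PRELOAD line above.

    # target side: TCP transport, one malloc-backed namespace, listeners on port 4420
    $rootdir/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    $rootdir/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    $rootdir/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rootdir/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rootdir/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rootdir/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # host side: drive I/O through the SPDK fio plugin against that listener
    LD_PRELOAD=$rootdir/build/fio/spdk_nvme /usr/src/fio/fio \
        $rootdir/app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

Treat this as an illustrative reproduction of the run above rather than the canonical contents of host/fio.sh.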
00:24:49.184 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:49.184 16:16:24 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:49.184 16:16:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:49.184 16:16:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:49.184 16:16:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:49.184 16:16:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:49.184 16:16:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:49.184 16:16:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:49.184 16:16:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:49.185 16:16:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:49.185 16:16:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:49.185 16:16:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:49.185 16:16:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:49.185 16:16:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:49.185 16:16:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:49.185 16:16:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:49.185 16:16:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:49.185 16:16:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:49.185 16:16:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:49.185 16:16:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:49.185 16:16:24 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:49.185 16:16:24 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:49.185 16:16:24 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:49.185 16:16:24 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.185 16:16:24 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.185 16:16:24 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.185 16:16:24 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:49.185 16:16:24 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.185 16:16:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:24:49.185 16:16:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:49.185 16:16:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:49.185 16:16:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:49.185 16:16:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:49.185 16:16:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:49.185 16:16:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:49.185 16:16:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:49.185 16:16:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:49.185 16:16:24 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:49.185 16:16:24 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:49.185 16:16:24 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:49.185 16:16:24 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:49.185 16:16:24 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:49.185 16:16:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:49.185 16:16:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:49.185 16:16:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:49.185 16:16:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:24:49.185 16:16:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:49.185 16:16:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:49.185 16:16:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:49.185 16:16:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.185 16:16:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:49.185 16:16:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:49.185 16:16:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:24:49.185 16:16:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:57.325 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:57.325 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:57.325 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:57.325 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:57.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:57.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.538 ms 00:24:57.325 00:24:57.325 --- 10.0.0.2 ping statistics --- 00:24:57.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.325 rtt min/avg/max/mdev = 0.538/0.538/0.538/0.000 ms 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:57.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:57.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.384 ms 00:24:57.325 00:24:57.325 --- 10.0.0.1 ping statistics --- 00:24:57.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.325 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:57.325 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:57.326 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:57.326 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:57.326 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:57.326 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:57.326 16:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:57.326 16:16:32 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:57.326 16:16:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:57.326 16:16:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:57.326 16:16:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:57.326 16:16:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:57.326 16:16:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2400878 00:24:57.326 16:16:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2400878 00:24:57.326 16:16:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2400878 ']' 00:24:57.326 16:16:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:57.326 16:16:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:57.326 16:16:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:57.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:57.326 16:16:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:57.326 16:16:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:57.326 [2024-07-15 16:16:32.084573] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
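The physical-NIC topology the failover test runs on was set up by nvmftestinit just above: the target-side port cvl_0_0 is moved into a private network namespace while the initiator-side port cvl_0_1 stays in the root namespace, and connectivity is checked in both directions before the target starts. Condensed to only the commands that appear in this log (again with $rootdir standing for the SPDK checkout), the sequence is roughly:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target address
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator address
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk $rootdir/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE

This is a condensed sketch of what the log records, not a substitute for nvmf/common.sh.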
00:24:57.326 [2024-07-15 16:16:32.084642] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:57.326 EAL: No free 2048 kB hugepages reported on node 1 00:24:57.326 [2024-07-15 16:16:32.171209] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:57.326 [2024-07-15 16:16:32.252811] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:57.326 [2024-07-15 16:16:32.252865] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:57.326 [2024-07-15 16:16:32.252873] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:57.326 [2024-07-15 16:16:32.252880] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:57.326 [2024-07-15 16:16:32.252886] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:57.326 [2024-07-15 16:16:32.253014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:57.326 [2024-07-15 16:16:32.253180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:57.326 [2024-07-15 16:16:32.253216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:57.326 16:16:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:57.326 16:16:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:24:57.326 16:16:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:57.326 16:16:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:57.326 16:16:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:57.326 16:16:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:57.326 16:16:32 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:57.326 [2024-07-15 16:16:33.048100] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:57.326 16:16:33 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:57.585 Malloc0 00:24:57.585 16:16:33 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:57.844 16:16:33 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:57.844 16:16:33 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:58.104 [2024-07-15 16:16:33.746640] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:58.104 16:16:33 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:58.104 [2024-07-15 
16:16:33.915068] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:58.104 16:16:33 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:58.362 [2024-07-15 16:16:34.083574] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:58.362 16:16:34 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2401320 00:24:58.362 16:16:34 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:58.362 16:16:34 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:58.362 16:16:34 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2401320 /var/tmp/bdevperf.sock 00:24:58.363 16:16:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2401320 ']' 00:24:58.363 16:16:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:58.363 16:16:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:58.363 16:16:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:58.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:58.363 16:16:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:58.363 16:16:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:59.301 16:16:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:59.301 16:16:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:24:59.301 16:16:34 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:59.560 NVMe0n1 00:24:59.561 16:16:35 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:59.859 00:24:59.859 16:16:35 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2401596 00:24:59.859 16:16:35 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:24:59.859 16:16:35 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:00.800 16:16:36 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:01.060 [2024-07-15 16:16:36.702254] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1addc50 is same with the state(5) to be set 00:25:01.060 [2024-07-15 16:16:36.702297] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1addc50 is same with the state(5) to be set 00:25:01.060 [2024-07-15 16:16:36.702302] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1addc50 is same with the state(5) to be set 00:25:01.060 [2024-07-15 16:16:36.702307] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1addc50 is same with the state(5) to be set 00:25:01.060 [2024-07-15 16:16:36.702311] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1addc50 is same with the state(5) to be set 00:25:01.060 [2024-07-15 16:16:36.702317] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1addc50 is same with the state(5) to be set 00:25:01.061 [2024-07-15 16:16:36.702326] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1addc50 is same with the state(5) to be set 00:25:01.061 [2024-07-15 16:16:36.702331] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1addc50 is same with the state(5) to be set 00:25:01.061 [2024-07-15 16:16:36.702335] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1addc50 is same with the state(5) to be set 00:25:01.061 [2024-07-15 16:16:36.702339] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1addc50 is same with the state(5) to be set 00:25:01.061 [2024-07-15 16:16:36.702344] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1addc50 is same with the state(5) to be set 00:25:01.061 [2024-07-15 16:16:36.702348] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1addc50 is same with the state(5) to be set 00:25:01.061 [2024-07-15 16:16:36.702353] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1addc50 is same with the state(5) to be set 00:25:01.061 [2024-07-15 16:16:36.702357] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1addc50 is same with the state(5) to be set 00:25:01.061 [2024-07-15 16:16:36.702361] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1addc50 is same with the state(5) to be set 00:25:01.061 [2024-07-15 16:16:36.702366] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1addc50 is same with the state(5) to be set 00:25:01.061 [2024-07-15 16:16:36.702370] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1addc50 is same with the state(5) to be set 00:25:01.061 [2024-07-15 16:16:36.702374] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1addc50 is same with the state(5) to be set 00:25:01.061 [2024-07-15 16:16:36.702379] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1addc50 is same with the state(5) to be set 00:25:01.061 [2024-07-15 16:16:36.702383] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1addc50 is same with the state(5) to be set 00:25:01.061 [2024-07-15 16:16:36.702387] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1addc50 is same with the state(5) to be set 00:25:01.061 16:16:36 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:25:04.359 16:16:39 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:04.359 00:25:04.359 16:16:40 
nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:04.619 [2024-07-15 16:16:40.225562] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adf370 is same with the state(5) to be set 00:25:04.619 [2024-07-15 16:16:40.225602] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adf370 is same with the state(5) to be set 00:25:04.619 [2024-07-15 16:16:40.225608] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adf370 is same with the state(5) to be set 00:25:04.620 [2024-07-15 16:16:40.225612] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adf370 is same with the state(5) to be set 00:25:04.620 [2024-07-15 16:16:40.225617] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adf370 is same with the state(5) to be set 00:25:04.620 [2024-07-15 16:16:40.225621] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adf370 is same with the state(5) to be set 00:25:04.620 [2024-07-15 16:16:40.225626] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adf370 is same with the state(5) to be set 00:25:04.620 [2024-07-15 16:16:40.225630] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adf370 is same with the state(5) to be set 00:25:04.620 [2024-07-15 16:16:40.225635] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adf370 is same with the state(5) to be set 00:25:04.620 [2024-07-15 16:16:40.225649] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adf370 is same with the state(5) to be set 00:25:04.620 [2024-07-15 16:16:40.225654] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adf370 is same with the state(5) to be set 00:25:04.620 [2024-07-15 16:16:40.225658] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adf370 is same with the state(5) to be set 00:25:04.620 [2024-07-15 16:16:40.225662] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adf370 is same with the state(5) to be set 00:25:04.620 [2024-07-15 16:16:40.225667] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adf370 is same with the state(5) to be set 00:25:04.620 [2024-07-15 16:16:40.225671] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adf370 is same with the state(5) to be set 00:25:04.620 [2024-07-15 16:16:40.225676] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adf370 is same with the state(5) to be set 00:25:04.620 [2024-07-15 16:16:40.225681] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adf370 is same with the state(5) to be set 00:25:04.620 [2024-07-15 16:16:40.225685] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adf370 is same with the state(5) to be set 00:25:04.620 [2024-07-15 16:16:40.225689] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adf370 is same with the state(5) to be set 00:25:04.620 [2024-07-15 16:16:40.225694] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adf370 is same with the state(5) to be set 00:25:04.620 [2024-07-15 16:16:40.225698] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adf370 is same with the state(5) to be set 00:25:04.621 [2024-07-15 16:16:40.226083] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adf370 is same
with the state(5) to be set 00:25:04.621 [2024-07-15 16:16:40.226087] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adf370 is same with the state(5) to be set 00:25:04.621 [2024-07-15 16:16:40.226092] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adf370 is same with the state(5) to be set 00:25:04.621 [2024-07-15 16:16:40.226097] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adf370 is same with the state(5) to be set 00:25:04.621 [2024-07-15 16:16:40.226101] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adf370 is same with the state(5) to be set 00:25:04.621 [2024-07-15 16:16:40.226105] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adf370 is same with the state(5) to be set 00:25:04.621 [2024-07-15 16:16:40.226109] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adf370 is same with the state(5) to be set 00:25:04.621 [2024-07-15 16:16:40.226113] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adf370 is same with the state(5) to be set 00:25:04.621 [2024-07-15 16:16:40.226117] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adf370 is same with the state(5) to be set 00:25:04.621 [2024-07-15 16:16:40.226128] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adf370 is same with the state(5) to be set 00:25:04.621 [2024-07-15 16:16:40.226132] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adf370 is same with the state(5) to be set 00:25:04.621 [2024-07-15 16:16:40.226138] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adf370 is same with the state(5) to be set 00:25:04.621 [2024-07-15 16:16:40.226142] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adf370 is same with the state(5) to be set 00:25:04.621 [2024-07-15 16:16:40.226146] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adf370 is same with the state(5) to be set 00:25:04.621 [2024-07-15 16:16:40.226150] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adf370 is same with the state(5) to be set 00:25:04.621 [2024-07-15 16:16:40.226155] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adf370 is same with the state(5) to be set 00:25:04.621 [2024-07-15 16:16:40.226159] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adf370 is same with the state(5) to be set 00:25:04.621 [2024-07-15 16:16:40.226163] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adf370 is same with the state(5) to be set 00:25:04.621 [2024-07-15 16:16:40.226167] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adf370 is same with the state(5) to be set 00:25:04.621 [2024-07-15 16:16:40.226171] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adf370 is same with the state(5) to be set 00:25:04.621 16:16:40 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:25:07.914 16:16:43 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:07.914 [2024-07-15 16:16:43.394642] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 10.0.0.2 port 4420 *** 00:25:07.914 16:16:43 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:25:08.856 16:16:44 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:08.856 [2024-07-15 16:16:44.573104] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.856 [2024-07-15 16:16:44.573143] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.856 [2024-07-15 16:16:44.573149] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.856 [2024-07-15 16:16:44.573153] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.856 [2024-07-15 16:16:44.573158] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.856 [2024-07-15 16:16:44.573163] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.856 [2024-07-15 16:16:44.573167] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.856 [2024-07-15 16:16:44.573172] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.856 [2024-07-15 16:16:44.573176] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.856 [2024-07-15 16:16:44.573180] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.856 [2024-07-15 16:16:44.573185] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.856 [2024-07-15 16:16:44.573189] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.856 [2024-07-15 16:16:44.573194] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.856 [2024-07-15 16:16:44.573198] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.856 [2024-07-15 16:16:44.573207] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.856 [2024-07-15 16:16:44.573211] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.856 [2024-07-15 16:16:44.573216] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.856 [2024-07-15 16:16:44.573220] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.856 [2024-07-15 16:16:44.573224] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.856 [2024-07-15 16:16:44.573229] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.856 [2024-07-15 16:16:44.573233] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.856 [2024-07-15 16:16:44.573237] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.856 [2024-07-15 16:16:44.573242] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.856 [2024-07-15 16:16:44.573246] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.856 [2024-07-15 16:16:44.573250] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.856 [2024-07-15 16:16:44.573254] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.856 [2024-07-15 16:16:44.573259] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.856 [2024-07-15 16:16:44.573263] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.856 [2024-07-15 16:16:44.573267] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.856 [2024-07-15 16:16:44.573272] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.856 [2024-07-15 16:16:44.573276] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.856 [2024-07-15 16:16:44.573280] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.856 [2024-07-15 16:16:44.573285] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.856 [2024-07-15 16:16:44.573289] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.856 [2024-07-15 16:16:44.573293] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.856 [2024-07-15 16:16:44.573298] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.856 [2024-07-15 16:16:44.573302] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.856 [2024-07-15 16:16:44.573307] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.856 [2024-07-15 16:16:44.573311] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.856 [2024-07-15 16:16:44.573316] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.856 [2024-07-15 16:16:44.573320] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.856 [2024-07-15 16:16:44.573326] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.856 [2024-07-15 16:16:44.573330] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.856 [2024-07-15 16:16:44.573335] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.856 [2024-07-15 16:16:44.573339] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.856 [2024-07-15 16:16:44.573343] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.856 [2024-07-15 16:16:44.573347] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.856 [2024-07-15 16:16:44.573352] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.857 [2024-07-15 16:16:44.573356] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.857 [2024-07-15 16:16:44.573361] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.857 [2024-07-15 16:16:44.573365] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.857 [2024-07-15 16:16:44.573370] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.857 [2024-07-15 16:16:44.573374] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.857 [2024-07-15 16:16:44.573378] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.857 [2024-07-15 16:16:44.573382] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.857 [2024-07-15 16:16:44.573387] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.857 [2024-07-15 16:16:44.573391] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.857 [2024-07-15 16:16:44.573395] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.857 [2024-07-15 16:16:44.573399] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.857 [2024-07-15 16:16:44.573403] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.857 [2024-07-15 16:16:44.573408] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.857 [2024-07-15 16:16:44.573413] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.857 [2024-07-15 16:16:44.573418] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the 
state(5) to be set 00:25:08.857 [2024-07-15 16:16:44.573422] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.857 [2024-07-15 16:16:44.573427] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.857 [2024-07-15 16:16:44.573431] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.857 [2024-07-15 16:16:44.573435] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.857 [2024-07-15 16:16:44.573441] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.857 [2024-07-15 16:16:44.573446] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.857 [2024-07-15 16:16:44.573451] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.857 [2024-07-15 16:16:44.573455] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.857 [2024-07-15 16:16:44.573460] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.857 [2024-07-15 16:16:44.573465] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.857 [2024-07-15 16:16:44.573471] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.857 [2024-07-15 16:16:44.573476] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.857 [2024-07-15 16:16:44.573480] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.857 [2024-07-15 16:16:44.573486] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.857 [2024-07-15 16:16:44.573491] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.857 [2024-07-15 16:16:44.573496] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.857 [2024-07-15 16:16:44.573500] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.857 [2024-07-15 16:16:44.573505] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.857 [2024-07-15 16:16:44.573509] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.857 [2024-07-15 16:16:44.573514] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.857 [2024-07-15 16:16:44.573518] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.857 [2024-07-15 16:16:44.573523] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.857 [2024-07-15 16:16:44.573527] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.857 [2024-07-15 16:16:44.573532] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.857 [2024-07-15 16:16:44.573536] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.857 [2024-07-15 16:16:44.573542] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.857 [2024-07-15 16:16:44.573547] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.857 [2024-07-15 16:16:44.573552] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.857 [2024-07-15 16:16:44.573556] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adfa70 is same with the state(5) to be set 00:25:08.857 16:16:44 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 2401596 00:25:15.440 0 00:25:15.440 16:16:50 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 2401320 00:25:15.440 16:16:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2401320 ']' 00:25:15.440 16:16:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2401320 00:25:15.440 16:16:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:25:15.440 16:16:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:15.440 16:16:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2401320 00:25:15.440 16:16:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:15.440 16:16:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:15.440 16:16:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2401320' 00:25:15.440 killing process with pid 2401320 00:25:15.440 16:16:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2401320 00:25:15.440 16:16:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2401320 00:25:15.440 16:16:50 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:15.440 [2024-07-15 16:16:34.171523] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:25:15.440 [2024-07-15 16:16:34.171583] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2401320 ] 00:25:15.440 EAL: No free 2048 kB hugepages reported on node 1 00:25:15.440 [2024-07-15 16:16:34.230525] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.440 [2024-07-15 16:16:34.294754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:15.440 Running I/O for 15 seconds... 
00:25:15.440 [2024-07-15 16:16:36.705178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:96488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.440 [2024-07-15 16:16:36.705214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.440 [2024-07-15 16:16:36.705231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:96496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.440 [2024-07-15 16:16:36.705239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.440 [2024-07-15 16:16:36.705249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:96504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.440 [2024-07-15 16:16:36.705256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.440 [2024-07-15 16:16:36.705266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:96512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.440 [2024-07-15 16:16:36.705273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.440 [2024-07-15 16:16:36.705283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:96520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.440 [2024-07-15 16:16:36.705290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.440 [2024-07-15 16:16:36.705299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.440 [2024-07-15 16:16:36.705307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.440 [2024-07-15 16:16:36.705316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:96536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.440 [2024-07-15 16:16:36.705323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.440 [2024-07-15 16:16:36.705332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:96544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.440 [2024-07-15 16:16:36.705339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.440 [2024-07-15 16:16:36.705348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:96552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.440 [2024-07-15 16:16:36.705355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.440 [2024-07-15 16:16:36.705364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:96560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.440 [2024-07-15 16:16:36.705371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.440 [2024-07-15 16:16:36.705380] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:96568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.440 [2024-07-15 16:16:36.705387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.440 [2024-07-15 16:16:36.705401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:96576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.440 [2024-07-15 16:16:36.705409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.440 [2024-07-15 16:16:36.705418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:96584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.440 [2024-07-15 16:16:36.705425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.440 [2024-07-15 16:16:36.705435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:96720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.440 [2024-07-15 16:16:36.705443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.440 [2024-07-15 16:16:36.705452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:96728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.440 [2024-07-15 16:16:36.705459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.440 [2024-07-15 16:16:36.705469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:96736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.440 [2024-07-15 16:16:36.705476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.440 [2024-07-15 16:16:36.705485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:96744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.440 [2024-07-15 16:16:36.705491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.440 [2024-07-15 16:16:36.705500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:96752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.440 [2024-07-15 16:16:36.705508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.440 [2024-07-15 16:16:36.705517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.440 [2024-07-15 16:16:36.705524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.440 [2024-07-15 16:16:36.705533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:96768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.440 [2024-07-15 16:16:36.705541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.440 [2024-07-15 16:16:36.705550] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.440 [2024-07-15 16:16:36.705558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.440 [2024-07-15 16:16:36.705567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:96784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.440 [2024-07-15 16:16:36.705575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.440 [2024-07-15 16:16:36.705584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:96792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.440 [2024-07-15 16:16:36.705592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.440 [2024-07-15 16:16:36.705601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:96800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.440 [2024-07-15 16:16:36.705611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.440 [2024-07-15 16:16:36.705620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:96808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.440 [2024-07-15 16:16:36.705627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.440 [2024-07-15 16:16:36.705636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:96816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.440 [2024-07-15 16:16:36.705643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.440 [2024-07-15 16:16:36.705652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:96824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.441 [2024-07-15 16:16:36.705659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.441 [2024-07-15 16:16:36.705668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:96832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.441 [2024-07-15 16:16:36.705675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.441 [2024-07-15 16:16:36.705684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.441 [2024-07-15 16:16:36.705691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.441 [2024-07-15 16:16:36.705700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:96848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.441 [2024-07-15 16:16:36.705707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.441 [2024-07-15 16:16:36.705716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:96856 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.441 [2024-07-15 16:16:36.705723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.441 [2024-07-15 16:16:36.705732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:96864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.441 [2024-07-15 16:16:36.705739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.441 [2024-07-15 16:16:36.705748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:96872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.441 [2024-07-15 16:16:36.705755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.441 [2024-07-15 16:16:36.705765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:96880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.441 [2024-07-15 16:16:36.705772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.441 [2024-07-15 16:16:36.705782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:96888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.441 [2024-07-15 16:16:36.705789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.441 [2024-07-15 16:16:36.705798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:96896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.441 [2024-07-15 16:16:36.705805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.441 [2024-07-15 16:16:36.705816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:96904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.441 [2024-07-15 16:16:36.705823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.441 [2024-07-15 16:16:36.705832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:96912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.441 [2024-07-15 16:16:36.705839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.441 [2024-07-15 16:16:36.705849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:96920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.441 [2024-07-15 16:16:36.705856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.441 [2024-07-15 16:16:36.705865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:96928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.441 [2024-07-15 16:16:36.705873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.441 [2024-07-15 16:16:36.705882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:96936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.441 
[2024-07-15 16:16:36.705889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.441 [2024-07-15 16:16:36.705897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:96944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.441 [2024-07-15 16:16:36.705905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.441 [2024-07-15 16:16:36.705914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:96952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.441 [2024-07-15 16:16:36.705921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.441 [2024-07-15 16:16:36.705930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.441 [2024-07-15 16:16:36.705937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.441 [2024-07-15 16:16:36.705946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:96968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.441 [2024-07-15 16:16:36.705953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.441 [2024-07-15 16:16:36.705962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:96976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.441 [2024-07-15 16:16:36.705969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.441 [2024-07-15 16:16:36.705979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:96984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.441 [2024-07-15 16:16:36.705986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.441 [2024-07-15 16:16:36.705995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:96992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.441 [2024-07-15 16:16:36.706002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.441 [2024-07-15 16:16:36.706011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:97000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.441 [2024-07-15 16:16:36.706020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.441 [2024-07-15 16:16:36.706029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:97008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.441 [2024-07-15 16:16:36.706035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.441 [2024-07-15 16:16:36.706045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:97016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.441 [2024-07-15 16:16:36.706051] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.441 [2024-07-15 16:16:36.706060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:97024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.441 [2024-07-15 16:16:36.706067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.441 [2024-07-15 16:16:36.706077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:97032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.441 [2024-07-15 16:16:36.706083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.441 [2024-07-15 16:16:36.706092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:97040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.441 [2024-07-15 16:16:36.706099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.441 [2024-07-15 16:16:36.706108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:97048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.441 [2024-07-15 16:16:36.706115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.441 [2024-07-15 16:16:36.706130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:97056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.441 [2024-07-15 16:16:36.706138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.441 [2024-07-15 16:16:36.706147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:97064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.441 [2024-07-15 16:16:36.706154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.441 [2024-07-15 16:16:36.706163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:97072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.441 [2024-07-15 16:16:36.706170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.441 [2024-07-15 16:16:36.706179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:97080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.441 [2024-07-15 16:16:36.706187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.441 [2024-07-15 16:16:36.706196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.441 [2024-07-15 16:16:36.706203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.441 [2024-07-15 16:16:36.706212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:97096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.441 [2024-07-15 16:16:36.706218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.441 [2024-07-15 16:16:36.706228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:97104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.441 [2024-07-15 16:16:36.706237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.441 [2024-07-15 16:16:36.706246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:97112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.441 [2024-07-15 16:16:36.706253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.441 [2024-07-15 16:16:36.706262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:97120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.441 [2024-07-15 16:16:36.706268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.441 [2024-07-15 16:16:36.706277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:97128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.441 [2024-07-15 16:16:36.706285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.441 [2024-07-15 16:16:36.706294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:97136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.441 [2024-07-15 16:16:36.706301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.441 [2024-07-15 16:16:36.706310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:97144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.441 [2024-07-15 16:16:36.706317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.441 [2024-07-15 16:16:36.706327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:97152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.441 [2024-07-15 16:16:36.706334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.441 [2024-07-15 16:16:36.706343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:97160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.441 [2024-07-15 16:16:36.706350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.441 [2024-07-15 16:16:36.706360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:97168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.442 [2024-07-15 16:16:36.706367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.442 [2024-07-15 16:16:36.706376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:97176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.442 [2024-07-15 16:16:36.706383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:15.442 [2024-07-15 16:16:36.706392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:97184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.442 [2024-07-15 16:16:36.706400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.442 [2024-07-15 16:16:36.706409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.442 [2024-07-15 16:16:36.706416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.442 [2024-07-15 16:16:36.706425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:97200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.442 [2024-07-15 16:16:36.706432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.442 [2024-07-15 16:16:36.706443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:97208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.442 [2024-07-15 16:16:36.706450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.442 [2024-07-15 16:16:36.706459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.442 [2024-07-15 16:16:36.706466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.442 [2024-07-15 16:16:36.706487] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.442 [2024-07-15 16:16:36.706495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97224 len:8 PRP1 0x0 PRP2 0x0 00:25:15.442 [2024-07-15 16:16:36.706502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.442 [2024-07-15 16:16:36.706536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.442 [2024-07-15 16:16:36.706548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.442 [2024-07-15 16:16:36.706556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.442 [2024-07-15 16:16:36.706564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.442 [2024-07-15 16:16:36.706571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.442 [2024-07-15 16:16:36.706578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.442 [2024-07-15 16:16:36.706587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.442 [2024-07-15 16:16:36.706595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.442 [2024-07-15 16:16:36.706602] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf25ef0 is same with the state(5) to be set 00:25:15.442 [2024-07-15 16:16:36.706774] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.442 [2024-07-15 16:16:36.706781] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.442 [2024-07-15 16:16:36.706787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97232 len:8 PRP1 0x0 PRP2 0x0 00:25:15.442 [2024-07-15 16:16:36.706795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.442 [2024-07-15 16:16:36.706804] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.442 [2024-07-15 16:16:36.706809] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.442 [2024-07-15 16:16:36.706816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97240 len:8 PRP1 0x0 PRP2 0x0 00:25:15.442 [2024-07-15 16:16:36.706822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.442 [2024-07-15 16:16:36.706830] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.442 [2024-07-15 16:16:36.706835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.442 [2024-07-15 16:16:36.706842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97248 len:8 PRP1 0x0 PRP2 0x0 00:25:15.442 [2024-07-15 16:16:36.706849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.442 [2024-07-15 16:16:36.706859] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.442 [2024-07-15 16:16:36.706865] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.442 [2024-07-15 16:16:36.706871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97256 len:8 PRP1 0x0 PRP2 0x0 00:25:15.442 [2024-07-15 16:16:36.706878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.442 [2024-07-15 16:16:36.706886] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.442 [2024-07-15 16:16:36.706891] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.442 [2024-07-15 16:16:36.706897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97264 len:8 PRP1 0x0 PRP2 0x0 00:25:15.442 [2024-07-15 16:16:36.706904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.442 [2024-07-15 16:16:36.706912] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.442 [2024-07-15 16:16:36.706917] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.442 [2024-07-15 16:16:36.706923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97272 len:8 PRP1 0x0 PRP2 
0x0 00:25:15.442 [2024-07-15 16:16:36.706930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.442 [2024-07-15 16:16:36.706937] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.442 [2024-07-15 16:16:36.706943] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.442 [2024-07-15 16:16:36.706949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97280 len:8 PRP1 0x0 PRP2 0x0 00:25:15.442 [2024-07-15 16:16:36.706956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.442 [2024-07-15 16:16:36.706963] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.442 [2024-07-15 16:16:36.706968] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.442 [2024-07-15 16:16:36.706975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97288 len:8 PRP1 0x0 PRP2 0x0 00:25:15.442 [2024-07-15 16:16:36.706982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.442 [2024-07-15 16:16:36.706989] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.442 [2024-07-15 16:16:36.706995] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.442 [2024-07-15 16:16:36.707001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97296 len:8 PRP1 0x0 PRP2 0x0 00:25:15.442 [2024-07-15 16:16:36.707009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.442 [2024-07-15 16:16:36.707016] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.442 [2024-07-15 16:16:36.707022] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.442 [2024-07-15 16:16:36.707027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97304 len:8 PRP1 0x0 PRP2 0x0 00:25:15.442 [2024-07-15 16:16:36.707035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.442 [2024-07-15 16:16:36.707042] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.442 [2024-07-15 16:16:36.707047] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.442 [2024-07-15 16:16:36.707054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97312 len:8 PRP1 0x0 PRP2 0x0 00:25:15.442 [2024-07-15 16:16:36.707062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.442 [2024-07-15 16:16:36.707070] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.442 [2024-07-15 16:16:36.707075] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.442 [2024-07-15 16:16:36.707081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97320 len:8 PRP1 0x0 PRP2 0x0 00:25:15.442 [2024-07-15 16:16:36.707088] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.442 [2024-07-15 16:16:36.707095] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.442 [2024-07-15 16:16:36.707100] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.442 [2024-07-15 16:16:36.707106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97328 len:8 PRP1 0x0 PRP2 0x0 00:25:15.442 [2024-07-15 16:16:36.707113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.442 [2024-07-15 16:16:36.707126] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.442 [2024-07-15 16:16:36.707132] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.442 [2024-07-15 16:16:36.707138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97336 len:8 PRP1 0x0 PRP2 0x0 00:25:15.442 [2024-07-15 16:16:36.707145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.442 [2024-07-15 16:16:36.707152] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.442 [2024-07-15 16:16:36.707158] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.442 [2024-07-15 16:16:36.707164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97344 len:8 PRP1 0x0 PRP2 0x0 00:25:15.442 [2024-07-15 16:16:36.707171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.442 [2024-07-15 16:16:36.707178] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.442 [2024-07-15 16:16:36.707184] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.442 [2024-07-15 16:16:36.707190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97352 len:8 PRP1 0x0 PRP2 0x0 00:25:15.442 [2024-07-15 16:16:36.707197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.442 [2024-07-15 16:16:36.707204] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.442 [2024-07-15 16:16:36.707210] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.442 [2024-07-15 16:16:36.707216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97360 len:8 PRP1 0x0 PRP2 0x0 00:25:15.442 [2024-07-15 16:16:36.707223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.442 [2024-07-15 16:16:36.707230] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.442 [2024-07-15 16:16:36.707236] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.442 [2024-07-15 16:16:36.707242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97368 len:8 PRP1 0x0 PRP2 0x0 00:25:15.442 [2024-07-15 16:16:36.707249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.443 [2024-07-15 16:16:36.707256] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.443 [2024-07-15 16:16:36.707262] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.443 [2024-07-15 16:16:36.707269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97376 len:8 PRP1 0x0 PRP2 0x0 00:25:15.443 [2024-07-15 16:16:36.707276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.443 [2024-07-15 16:16:36.707283] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.443 [2024-07-15 16:16:36.707288] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.443 [2024-07-15 16:16:36.707294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97384 len:8 PRP1 0x0 PRP2 0x0 00:25:15.443 [2024-07-15 16:16:36.707301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.443 [2024-07-15 16:16:36.707308] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.443 [2024-07-15 16:16:36.707313] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.443 [2024-07-15 16:16:36.707319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97392 len:8 PRP1 0x0 PRP2 0x0 00:25:15.443 [2024-07-15 16:16:36.707327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.443 [2024-07-15 16:16:36.707334] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.443 [2024-07-15 16:16:36.707339] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.443 [2024-07-15 16:16:36.707345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97400 len:8 PRP1 0x0 PRP2 0x0 00:25:15.443 [2024-07-15 16:16:36.707352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.443 [2024-07-15 16:16:36.707360] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.443 [2024-07-15 16:16:36.707365] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.443 [2024-07-15 16:16:36.707371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97408 len:8 PRP1 0x0 PRP2 0x0 00:25:15.443 [2024-07-15 16:16:36.707378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.443 [2024-07-15 16:16:36.707385] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.443 [2024-07-15 16:16:36.707391] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.443 [2024-07-15 16:16:36.707398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97416 len:8 PRP1 0x0 PRP2 0x0 00:25:15.443 [2024-07-15 16:16:36.707405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:15.443 [2024-07-15 16:16:36.707413] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.443 [2024-07-15 16:16:36.707418] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.443 [2024-07-15 16:16:36.707425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97424 len:8 PRP1 0x0 PRP2 0x0 00:25:15.443 [2024-07-15 16:16:36.707432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.443 [2024-07-15 16:16:36.707440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.443 [2024-07-15 16:16:36.707446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.443 [2024-07-15 16:16:36.707452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97432 len:8 PRP1 0x0 PRP2 0x0 00:25:15.443 [2024-07-15 16:16:36.707459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.443 [2024-07-15 16:16:36.707470] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.443 [2024-07-15 16:16:36.707476] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.443 [2024-07-15 16:16:36.707482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97440 len:8 PRP1 0x0 PRP2 0x0 00:25:15.443 [2024-07-15 16:16:36.707490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.443 [2024-07-15 16:16:36.707497] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.443 [2024-07-15 16:16:36.707503] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.443 [2024-07-15 16:16:36.707509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97448 len:8 PRP1 0x0 PRP2 0x0 00:25:15.443 [2024-07-15 16:16:36.707516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.443 [2024-07-15 16:16:36.707524] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.443 [2024-07-15 16:16:36.707530] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.443 [2024-07-15 16:16:36.707536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97456 len:8 PRP1 0x0 PRP2 0x0 00:25:15.443 [2024-07-15 16:16:36.707542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.443 [2024-07-15 16:16:36.707550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.443 [2024-07-15 16:16:36.707555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.443 [2024-07-15 16:16:36.707561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97464 len:8 PRP1 0x0 PRP2 0x0 00:25:15.443 [2024-07-15 16:16:36.707568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.443 [2024-07-15 
16:16:36.707576] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.443 [2024-07-15 16:16:36.707581] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.443 [2024-07-15 16:16:36.707587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97472 len:8 PRP1 0x0 PRP2 0x0 00:25:15.443 [2024-07-15 16:16:36.707594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.443 [2024-07-15 16:16:36.707602] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.443 [2024-07-15 16:16:36.707607] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.443 [2024-07-15 16:16:36.707612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97480 len:8 PRP1 0x0 PRP2 0x0 00:25:15.443 [2024-07-15 16:16:36.707623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.443 [2024-07-15 16:16:36.707631] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.443 [2024-07-15 16:16:36.707637] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.443 [2024-07-15 16:16:36.707642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97488 len:8 PRP1 0x0 PRP2 0x0 00:25:15.443 [2024-07-15 16:16:36.707649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.443 [2024-07-15 16:16:36.707656] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.443 [2024-07-15 16:16:36.707662] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.443 [2024-07-15 16:16:36.707668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97496 len:8 PRP1 0x0 PRP2 0x0 00:25:15.443 [2024-07-15 16:16:36.707676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.443 [2024-07-15 16:16:36.707684] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.443 [2024-07-15 16:16:36.707689] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.443 [2024-07-15 16:16:36.707695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96592 len:8 PRP1 0x0 PRP2 0x0 00:25:15.443 [2024-07-15 16:16:36.707701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.443 [2024-07-15 16:16:36.707708] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.443 [2024-07-15 16:16:36.707714] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.443 [2024-07-15 16:16:36.707719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96600 len:8 PRP1 0x0 PRP2 0x0 00:25:15.443 [2024-07-15 16:16:36.707726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.443 [2024-07-15 16:16:36.707734] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.443 [2024-07-15 16:16:36.707740] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.443 [2024-07-15 16:16:36.707746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96608 len:8 PRP1 0x0 PRP2 0x0 00:25:15.443 [2024-07-15 16:16:36.707753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.443 [2024-07-15 16:16:36.707760] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.443 [2024-07-15 16:16:36.707765] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.443 [2024-07-15 16:16:36.707771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96616 len:8 PRP1 0x0 PRP2 0x0 00:25:15.443 [2024-07-15 16:16:36.707778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.443 [2024-07-15 16:16:36.707785] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.443 [2024-07-15 16:16:36.707791] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.443 [2024-07-15 16:16:36.707796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96624 len:8 PRP1 0x0 PRP2 0x0 00:25:15.443 [2024-07-15 16:16:36.717861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.443 [2024-07-15 16:16:36.717893] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.443 [2024-07-15 16:16:36.717901] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.443 [2024-07-15 16:16:36.717911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96632 len:8 PRP1 0x0 PRP2 0x0 00:25:15.443 [2024-07-15 16:16:36.717920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.443 [2024-07-15 16:16:36.717929] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.443 [2024-07-15 16:16:36.717936] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.443 [2024-07-15 16:16:36.717943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96640 len:8 PRP1 0x0 PRP2 0x0 00:25:15.443 [2024-07-15 16:16:36.717951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.443 [2024-07-15 16:16:36.717960] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.443 [2024-07-15 16:16:36.717966] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.443 [2024-07-15 16:16:36.717978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96648 len:8 PRP1 0x0 PRP2 0x0 00:25:15.443 [2024-07-15 16:16:36.717985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.443 [2024-07-15 16:16:36.717992] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:25:15.443 [2024-07-15 16:16:36.717998] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.443 [2024-07-15 16:16:36.718004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96656 len:8 PRP1 0x0 PRP2 0x0 00:25:15.443 [2024-07-15 16:16:36.718011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.444 [2024-07-15 16:16:36.718018] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.444 [2024-07-15 16:16:36.718023] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.444 [2024-07-15 16:16:36.718029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96664 len:8 PRP1 0x0 PRP2 0x0 00:25:15.444 [2024-07-15 16:16:36.718036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.444 [2024-07-15 16:16:36.718044] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.444 [2024-07-15 16:16:36.718049] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.444 [2024-07-15 16:16:36.718055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96672 len:8 PRP1 0x0 PRP2 0x0 00:25:15.444 [2024-07-15 16:16:36.718062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.444 [2024-07-15 16:16:36.718070] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.444 [2024-07-15 16:16:36.718075] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.444 [2024-07-15 16:16:36.718081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96680 len:8 PRP1 0x0 PRP2 0x0 00:25:15.444 [2024-07-15 16:16:36.718088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.444 [2024-07-15 16:16:36.718096] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.444 [2024-07-15 16:16:36.718101] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.444 [2024-07-15 16:16:36.718107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96688 len:8 PRP1 0x0 PRP2 0x0 00:25:15.444 [2024-07-15 16:16:36.718114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.444 [2024-07-15 16:16:36.718129] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.444 [2024-07-15 16:16:36.718135] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.444 [2024-07-15 16:16:36.718141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96696 len:8 PRP1 0x0 PRP2 0x0 00:25:15.444 [2024-07-15 16:16:36.718149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.444 [2024-07-15 16:16:36.718157] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.444 [2024-07-15 16:16:36.718163] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.444 [2024-07-15 16:16:36.718169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96704 len:8 PRP1 0x0 PRP2 0x0 00:25:15.444 [2024-07-15 16:16:36.718175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.444 [2024-07-15 16:16:36.718184] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.444 [2024-07-15 16:16:36.718191] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.444 [2024-07-15 16:16:36.718198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97504 len:8 PRP1 0x0 PRP2 0x0 00:25:15.444 [2024-07-15 16:16:36.718205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.444 [2024-07-15 16:16:36.718212] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.444 [2024-07-15 16:16:36.718217] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.444 [2024-07-15 16:16:36.718223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96712 len:8 PRP1 0x0 PRP2 0x0 00:25:15.444 [2024-07-15 16:16:36.718230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.444 [2024-07-15 16:16:36.718238] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.444 [2024-07-15 16:16:36.718243] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.444 [2024-07-15 16:16:36.718248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96488 len:8 PRP1 0x0 PRP2 0x0 00:25:15.444 [2024-07-15 16:16:36.718255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.444 [2024-07-15 16:16:36.718262] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.444 [2024-07-15 16:16:36.718268] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.444 [2024-07-15 16:16:36.718274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96496 len:8 PRP1 0x0 PRP2 0x0 00:25:15.444 [2024-07-15 16:16:36.718281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.444 [2024-07-15 16:16:36.718288] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.444 [2024-07-15 16:16:36.718293] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.444 [2024-07-15 16:16:36.718299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96504 len:8 PRP1 0x0 PRP2 0x0 00:25:15.444 [2024-07-15 16:16:36.718306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.444 [2024-07-15 16:16:36.718313] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.444 [2024-07-15 16:16:36.718318] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:25:15.444 [2024-07-15 16:16:36.718324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96512 len:8 PRP1 0x0 PRP2 0x0 00:25:15.444 [2024-07-15 16:16:36.718331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.444 [2024-07-15 16:16:36.718338] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.444 [2024-07-15 16:16:36.718344] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.444 [2024-07-15 16:16:36.718350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96520 len:8 PRP1 0x0 PRP2 0x0 00:25:15.444 [2024-07-15 16:16:36.718357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.444 [2024-07-15 16:16:36.718364] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.444 [2024-07-15 16:16:36.718369] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.444 [2024-07-15 16:16:36.718375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96528 len:8 PRP1 0x0 PRP2 0x0 00:25:15.444 [2024-07-15 16:16:36.718382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.444 [2024-07-15 16:16:36.718391] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.444 [2024-07-15 16:16:36.718396] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.444 [2024-07-15 16:16:36.718402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96536 len:8 PRP1 0x0 PRP2 0x0 00:25:15.444 [2024-07-15 16:16:36.718409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.444 [2024-07-15 16:16:36.718416] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.444 [2024-07-15 16:16:36.718421] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.444 [2024-07-15 16:16:36.718427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96544 len:8 PRP1 0x0 PRP2 0x0 00:25:15.444 [2024-07-15 16:16:36.718434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.444 [2024-07-15 16:16:36.718442] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.444 [2024-07-15 16:16:36.718447] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.444 [2024-07-15 16:16:36.718453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96552 len:8 PRP1 0x0 PRP2 0x0 00:25:15.444 [2024-07-15 16:16:36.718459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.444 [2024-07-15 16:16:36.718467] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.444 [2024-07-15 16:16:36.718472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.444 [2024-07-15 16:16:36.718478] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96560 len:8 PRP1 0x0 PRP2 0x0 00:25:15.444 [2024-07-15 16:16:36.718484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.444 [2024-07-15 16:16:36.718492] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.444 [2024-07-15 16:16:36.718497] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.444 [2024-07-15 16:16:36.718503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96568 len:8 PRP1 0x0 PRP2 0x0 00:25:15.444 [2024-07-15 16:16:36.718509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.444 [2024-07-15 16:16:36.718517] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.444 [2024-07-15 16:16:36.718522] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.444 [2024-07-15 16:16:36.718528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96576 len:8 PRP1 0x0 PRP2 0x0 00:25:15.444 [2024-07-15 16:16:36.718534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.444 [2024-07-15 16:16:36.718542] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.445 [2024-07-15 16:16:36.718547] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.445 [2024-07-15 16:16:36.718553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96584 len:8 PRP1 0x0 PRP2 0x0 00:25:15.445 [2024-07-15 16:16:36.718560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.445 [2024-07-15 16:16:36.718567] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.445 [2024-07-15 16:16:36.718572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.445 [2024-07-15 16:16:36.718577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96720 len:8 PRP1 0x0 PRP2 0x0 00:25:15.445 [2024-07-15 16:16:36.718586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.445 [2024-07-15 16:16:36.718594] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.445 [2024-07-15 16:16:36.718599] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.445 [2024-07-15 16:16:36.718604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96728 len:8 PRP1 0x0 PRP2 0x0 00:25:15.445 [2024-07-15 16:16:36.718611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.445 [2024-07-15 16:16:36.718619] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.445 [2024-07-15 16:16:36.718624] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.445 [2024-07-15 16:16:36.718630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:96736 len:8 PRP1 0x0 PRP2 0x0 00:25:15.445 [2024-07-15 16:16:36.718636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.445 [2024-07-15 16:16:36.718643] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.445 [2024-07-15 16:16:36.718648] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.445 [2024-07-15 16:16:36.718654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96744 len:8 PRP1 0x0 PRP2 0x0 00:25:15.445 [2024-07-15 16:16:36.718661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.445 [2024-07-15 16:16:36.718668] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.445 [2024-07-15 16:16:36.718673] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.445 [2024-07-15 16:16:36.718679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96752 len:8 PRP1 0x0 PRP2 0x0 00:25:15.445 [2024-07-15 16:16:36.718686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.445 [2024-07-15 16:16:36.718693] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.445 [2024-07-15 16:16:36.718698] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.445 [2024-07-15 16:16:36.718704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96760 len:8 PRP1 0x0 PRP2 0x0 00:25:15.445 [2024-07-15 16:16:36.718711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.445 [2024-07-15 16:16:36.718718] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.445 [2024-07-15 16:16:36.718723] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.445 [2024-07-15 16:16:36.718728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96768 len:8 PRP1 0x0 PRP2 0x0 00:25:15.445 [2024-07-15 16:16:36.718735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.445 [2024-07-15 16:16:36.718742] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.445 [2024-07-15 16:16:36.718749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.445 [2024-07-15 16:16:36.718755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96776 len:8 PRP1 0x0 PRP2 0x0 00:25:15.445 [2024-07-15 16:16:36.718761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.445 [2024-07-15 16:16:36.718769] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.445 [2024-07-15 16:16:36.718774] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.445 [2024-07-15 16:16:36.718781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96784 len:8 PRP1 0x0 PRP2 
0x0 00:25:15.445 [2024-07-15 16:16:36.718789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.445 [2024-07-15 16:16:36.718797] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.445 [2024-07-15 16:16:36.718802] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.445 [2024-07-15 16:16:36.718809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96792 len:8 PRP1 0x0 PRP2 0x0 00:25:15.445 [2024-07-15 16:16:36.718818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.445 [2024-07-15 16:16:36.718827] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.445 [2024-07-15 16:16:36.718833] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.445 [2024-07-15 16:16:36.718838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96800 len:8 PRP1 0x0 PRP2 0x0 00:25:15.445 [2024-07-15 16:16:36.718845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.445 [2024-07-15 16:16:36.718853] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.445 [2024-07-15 16:16:36.718858] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.445 [2024-07-15 16:16:36.718866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96808 len:8 PRP1 0x0 PRP2 0x0 00:25:15.445 [2024-07-15 16:16:36.718873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.445 [2024-07-15 16:16:36.718880] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.445 [2024-07-15 16:16:36.718885] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.445 [2024-07-15 16:16:36.718891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96816 len:8 PRP1 0x0 PRP2 0x0 00:25:15.445 [2024-07-15 16:16:36.718898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.445 [2024-07-15 16:16:36.718906] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.445 [2024-07-15 16:16:36.718911] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.445 [2024-07-15 16:16:36.718917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96824 len:8 PRP1 0x0 PRP2 0x0 00:25:15.445 [2024-07-15 16:16:36.718924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.445 [2024-07-15 16:16:36.718931] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.445 [2024-07-15 16:16:36.718938] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.445 [2024-07-15 16:16:36.718944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96832 len:8 PRP1 0x0 PRP2 0x0 00:25:15.445 [2024-07-15 16:16:36.718951] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.445 [2024-07-15 16:16:36.718958] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.445 [2024-07-15 16:16:36.718964] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.445 [2024-07-15 16:16:36.718970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96840 len:8 PRP1 0x0 PRP2 0x0 00:25:15.445 [2024-07-15 16:16:36.718977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.445 [2024-07-15 16:16:36.718984] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.445 [2024-07-15 16:16:36.718991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.445 [2024-07-15 16:16:36.718997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96848 len:8 PRP1 0x0 PRP2 0x0 00:25:15.445 [2024-07-15 16:16:36.719004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.445 [2024-07-15 16:16:36.719012] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.445 [2024-07-15 16:16:36.719017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.445 [2024-07-15 16:16:36.719026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96856 len:8 PRP1 0x0 PRP2 0x0 00:25:15.445 [2024-07-15 16:16:36.719033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.445 [2024-07-15 16:16:36.719041] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.445 [2024-07-15 16:16:36.719047] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.445 [2024-07-15 16:16:36.719052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96864 len:8 PRP1 0x0 PRP2 0x0 00:25:15.445 [2024-07-15 16:16:36.719060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.445 [2024-07-15 16:16:36.719067] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.445 [2024-07-15 16:16:36.719073] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.445 [2024-07-15 16:16:36.719079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96872 len:8 PRP1 0x0 PRP2 0x0 00:25:15.445 [2024-07-15 16:16:36.719086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.445 [2024-07-15 16:16:36.719093] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.445 [2024-07-15 16:16:36.719098] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.445 [2024-07-15 16:16:36.719104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96880 len:8 PRP1 0x0 PRP2 0x0 00:25:15.445 [2024-07-15 16:16:36.719112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.445 [2024-07-15 16:16:36.719119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.445 [2024-07-15 16:16:36.719128] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.445 [2024-07-15 16:16:36.719135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96888 len:8 PRP1 0x0 PRP2 0x0 00:25:15.445 [2024-07-15 16:16:36.719152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.445 [2024-07-15 16:16:36.719160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.445 [2024-07-15 16:16:36.719167] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.445 [2024-07-15 16:16:36.719173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96896 len:8 PRP1 0x0 PRP2 0x0 00:25:15.445 [2024-07-15 16:16:36.719179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.445 [2024-07-15 16:16:36.719187] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.445 [2024-07-15 16:16:36.719192] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.445 [2024-07-15 16:16:36.719200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96904 len:8 PRP1 0x0 PRP2 0x0 00:25:15.445 [2024-07-15 16:16:36.719207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.445 [2024-07-15 16:16:36.719217] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.445 [2024-07-15 16:16:36.719222] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.446 [2024-07-15 16:16:36.719229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96912 len:8 PRP1 0x0 PRP2 0x0 00:25:15.446 [2024-07-15 16:16:36.719237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.446 [2024-07-15 16:16:36.719245] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.446 [2024-07-15 16:16:36.719251] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.446 [2024-07-15 16:16:36.719257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96920 len:8 PRP1 0x0 PRP2 0x0 00:25:15.446 [2024-07-15 16:16:36.719264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.446 [2024-07-15 16:16:36.719272] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.446 [2024-07-15 16:16:36.719280] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.446 [2024-07-15 16:16:36.719287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96928 len:8 PRP1 0x0 PRP2 0x0 00:25:15.446 [2024-07-15 16:16:36.719296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:15.446 [2024-07-15 16:16:36.719306] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.446 [2024-07-15 16:16:36.719311] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.446 [2024-07-15 16:16:36.719317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96936 len:8 PRP1 0x0 PRP2 0x0 00:25:15.446 [2024-07-15 16:16:36.719325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.446 [2024-07-15 16:16:36.719332] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.446 [2024-07-15 16:16:36.719337] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.446 [2024-07-15 16:16:36.719343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96944 len:8 PRP1 0x0 PRP2 0x0 00:25:15.446 [2024-07-15 16:16:36.719351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.446 [2024-07-15 16:16:36.719359] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.446 [2024-07-15 16:16:36.719364] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.446 [2024-07-15 16:16:36.719370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96952 len:8 PRP1 0x0 PRP2 0x0 00:25:15.446 [2024-07-15 16:16:36.719377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.446 [2024-07-15 16:16:36.719384] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.446 [2024-07-15 16:16:36.719389] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.446 [2024-07-15 16:16:36.719395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96960 len:8 PRP1 0x0 PRP2 0x0 00:25:15.446 [2024-07-15 16:16:36.719402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.446 [2024-07-15 16:16:36.719410] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.446 [2024-07-15 16:16:36.719415] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.446 [2024-07-15 16:16:36.719421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96968 len:8 PRP1 0x0 PRP2 0x0 00:25:15.446 [2024-07-15 16:16:36.719429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.446 [2024-07-15 16:16:36.719437] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.446 [2024-07-15 16:16:36.719443] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.446 [2024-07-15 16:16:36.719449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96976 len:8 PRP1 0x0 PRP2 0x0 00:25:15.446 [2024-07-15 16:16:36.719456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.446 [2024-07-15 
16:16:36.719464] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.446 [2024-07-15 16:16:36.719469] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.446 [2024-07-15 16:16:36.719476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96984 len:8 PRP1 0x0 PRP2 0x0 00:25:15.446 [2024-07-15 16:16:36.719484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.446 [2024-07-15 16:16:36.719491] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.446 [2024-07-15 16:16:36.719497] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.446 [2024-07-15 16:16:36.719503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96992 len:8 PRP1 0x0 PRP2 0x0 00:25:15.446 [2024-07-15 16:16:36.719512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.446 [2024-07-15 16:16:36.719519] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.446 [2024-07-15 16:16:36.719525] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.446 [2024-07-15 16:16:36.719532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97000 len:8 PRP1 0x0 PRP2 0x0 00:25:15.446 [2024-07-15 16:16:36.719538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.446 [2024-07-15 16:16:36.719547] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.446 [2024-07-15 16:16:36.719553] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.446 [2024-07-15 16:16:36.719560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97008 len:8 PRP1 0x0 PRP2 0x0 00:25:15.446 [2024-07-15 16:16:36.719567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.446 [2024-07-15 16:16:36.719574] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.446 [2024-07-15 16:16:36.719580] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.446 [2024-07-15 16:16:36.719587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97016 len:8 PRP1 0x0 PRP2 0x0 00:25:15.446 [2024-07-15 16:16:36.719594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.446 [2024-07-15 16:16:36.719601] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.446 [2024-07-15 16:16:36.727571] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.446 [2024-07-15 16:16:36.727600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97024 len:8 PRP1 0x0 PRP2 0x0 00:25:15.446 [2024-07-15 16:16:36.727610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.446 [2024-07-15 16:16:36.727622] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.446 [2024-07-15 16:16:36.727632] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.446 [2024-07-15 16:16:36.727639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97032 len:8 PRP1 0x0 PRP2 0x0 00:25:15.446 [2024-07-15 16:16:36.727646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.446 [2024-07-15 16:16:36.727654] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.446 [2024-07-15 16:16:36.727660] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.446 [2024-07-15 16:16:36.727666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97040 len:8 PRP1 0x0 PRP2 0x0 00:25:15.446 [2024-07-15 16:16:36.727672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.446 [2024-07-15 16:16:36.727680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.446 [2024-07-15 16:16:36.727685] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.446 [2024-07-15 16:16:36.727691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97048 len:8 PRP1 0x0 PRP2 0x0 00:25:15.446 [2024-07-15 16:16:36.727698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.446 [2024-07-15 16:16:36.727706] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.446 [2024-07-15 16:16:36.727711] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.446 [2024-07-15 16:16:36.727717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97056 len:8 PRP1 0x0 PRP2 0x0 00:25:15.446 [2024-07-15 16:16:36.727724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.446 [2024-07-15 16:16:36.727731] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.446 [2024-07-15 16:16:36.727737] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.446 [2024-07-15 16:16:36.727742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97064 len:8 PRP1 0x0 PRP2 0x0 00:25:15.446 [2024-07-15 16:16:36.727749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.446 [2024-07-15 16:16:36.727757] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.446 [2024-07-15 16:16:36.727762] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.446 [2024-07-15 16:16:36.727768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97072 len:8 PRP1 0x0 PRP2 0x0 00:25:15.446 [2024-07-15 16:16:36.727775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.446 [2024-07-15 16:16:36.727782] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:25:15.446 [2024-07-15 16:16:36.727788] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.446 [2024-07-15 16:16:36.727794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97080 len:8 PRP1 0x0 PRP2 0x0 00:25:15.446 [2024-07-15 16:16:36.727800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.446 [2024-07-15 16:16:36.727808] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.446 [2024-07-15 16:16:36.727813] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.446 [2024-07-15 16:16:36.727819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97088 len:8 PRP1 0x0 PRP2 0x0 00:25:15.446 [2024-07-15 16:16:36.727826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.446 [2024-07-15 16:16:36.727834] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.446 [2024-07-15 16:16:36.727840] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.446 [2024-07-15 16:16:36.727846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97096 len:8 PRP1 0x0 PRP2 0x0 00:25:15.446 [2024-07-15 16:16:36.727853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.446 [2024-07-15 16:16:36.727860] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.446 [2024-07-15 16:16:36.727866] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.446 [2024-07-15 16:16:36.727872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97104 len:8 PRP1 0x0 PRP2 0x0 00:25:15.446 [2024-07-15 16:16:36.727878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.446 [2024-07-15 16:16:36.727886] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.447 [2024-07-15 16:16:36.727892] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.447 [2024-07-15 16:16:36.727898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97112 len:8 PRP1 0x0 PRP2 0x0 00:25:15.447 [2024-07-15 16:16:36.727906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.447 [2024-07-15 16:16:36.727914] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.447 [2024-07-15 16:16:36.727919] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.447 [2024-07-15 16:16:36.727925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97120 len:8 PRP1 0x0 PRP2 0x0 00:25:15.447 [2024-07-15 16:16:36.727932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.447 [2024-07-15 16:16:36.727940] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.447 [2024-07-15 16:16:36.727945] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.447 [2024-07-15 16:16:36.727951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97128 len:8 PRP1 0x0 PRP2 0x0 00:25:15.447 [2024-07-15 16:16:36.727958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.447 [2024-07-15 16:16:36.727965] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.447 [2024-07-15 16:16:36.727970] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.447 [2024-07-15 16:16:36.727976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97136 len:8 PRP1 0x0 PRP2 0x0 00:25:15.447 [2024-07-15 16:16:36.727983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.447 [2024-07-15 16:16:36.727990] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.447 [2024-07-15 16:16:36.727995] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.447 [2024-07-15 16:16:36.728001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97144 len:8 PRP1 0x0 PRP2 0x0 00:25:15.447 [2024-07-15 16:16:36.728008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.447 [2024-07-15 16:16:36.728015] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.447 [2024-07-15 16:16:36.728021] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.447 [2024-07-15 16:16:36.728026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97152 len:8 PRP1 0x0 PRP2 0x0 00:25:15.447 [2024-07-15 16:16:36.728035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.447 [2024-07-15 16:16:36.728043] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.447 [2024-07-15 16:16:36.728048] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.447 [2024-07-15 16:16:36.728055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97160 len:8 PRP1 0x0 PRP2 0x0 00:25:15.447 [2024-07-15 16:16:36.728063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.447 [2024-07-15 16:16:36.728070] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.447 [2024-07-15 16:16:36.728075] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.447 [2024-07-15 16:16:36.728081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97168 len:8 PRP1 0x0 PRP2 0x0 00:25:15.447 [2024-07-15 16:16:36.728088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.447 [2024-07-15 16:16:36.728096] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.447 [2024-07-15 16:16:36.728101] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:25:15.447 [2024-07-15 16:16:36.728107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97176 len:8 PRP1 0x0 PRP2 0x0 00:25:15.447 [2024-07-15 16:16:36.728114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.447 [2024-07-15 16:16:36.728127] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.447 [2024-07-15 16:16:36.728133] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.447 [2024-07-15 16:16:36.728139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97184 len:8 PRP1 0x0 PRP2 0x0 00:25:15.447 [2024-07-15 16:16:36.728146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.447 [2024-07-15 16:16:36.728153] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.447 [2024-07-15 16:16:36.728158] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.447 [2024-07-15 16:16:36.728164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97192 len:8 PRP1 0x0 PRP2 0x0 00:25:15.447 [2024-07-15 16:16:36.728171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.447 [2024-07-15 16:16:36.728179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.447 [2024-07-15 16:16:36.728184] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.447 [2024-07-15 16:16:36.728189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97200 len:8 PRP1 0x0 PRP2 0x0 00:25:15.447 [2024-07-15 16:16:36.728197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.447 [2024-07-15 16:16:36.728204] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.447 [2024-07-15 16:16:36.728209] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.447 [2024-07-15 16:16:36.728215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97208 len:8 PRP1 0x0 PRP2 0x0 00:25:15.447 [2024-07-15 16:16:36.728223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.447 [2024-07-15 16:16:36.728230] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.447 [2024-07-15 16:16:36.728235] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.447 [2024-07-15 16:16:36.728243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97216 len:8 PRP1 0x0 PRP2 0x0 00:25:15.447 [2024-07-15 16:16:36.728251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.447 [2024-07-15 16:16:36.728259] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.447 [2024-07-15 16:16:36.728264] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.447 
[2024-07-15 16:16:36.728270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97224 len:8 PRP1 0x0 PRP2 0x0 00:25:15.447 [2024-07-15 16:16:36.728277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.447 [2024-07-15 16:16:36.728315] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf47300 was disconnected and freed. reset controller. 00:25:15.447 [2024-07-15 16:16:36.728325] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:15.447 [2024-07-15 16:16:36.728333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.447 [2024-07-15 16:16:36.728376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf25ef0 (9): Bad file descriptor 00:25:15.447 [2024-07-15 16:16:36.731895] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.447 [2024-07-15 16:16:36.770079] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:15.447 [2024-07-15 16:16:40.227514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:31976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.447 [2024-07-15 16:16:40.227551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.447 [2024-07-15 16:16:40.227570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:31984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.447 [2024-07-15 16:16:40.227578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.447 [2024-07-15 16:16:40.227588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:31992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.447 [2024-07-15 16:16:40.227595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.447 [2024-07-15 16:16:40.227605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:32000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.447 [2024-07-15 16:16:40.227612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.447 [2024-07-15 16:16:40.227622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:32008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.447 [2024-07-15 16:16:40.227629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.447 [2024-07-15 16:16:40.227638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:32016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.447 [2024-07-15 16:16:40.227645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.447 [2024-07-15 16:16:40.227655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:32024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.447 [2024-07-15 16:16:40.227661] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.447 [2024-07-15 16:16:40.227671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:32032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.447 [2024-07-15 16:16:40.227683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.447 [2024-07-15 16:16:40.227692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:32040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.447 [2024-07-15 16:16:40.227700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.447 [2024-07-15 16:16:40.227709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:32048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.447 [2024-07-15 16:16:40.227716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.447 [2024-07-15 16:16:40.227725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:32056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.447 [2024-07-15 16:16:40.227732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.447 [2024-07-15 16:16:40.227741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:32064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.447 [2024-07-15 16:16:40.227748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.447 [2024-07-15 16:16:40.227757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:32072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.447 [2024-07-15 16:16:40.227764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.447 [2024-07-15 16:16:40.227774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:32080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.447 [2024-07-15 16:16:40.227780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.448 [2024-07-15 16:16:40.227790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:32088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.448 [2024-07-15 16:16:40.227797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.448 [2024-07-15 16:16:40.227806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:32096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.448 [2024-07-15 16:16:40.227813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.448 [2024-07-15 16:16:40.227823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:32104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.448 [2024-07-15 16:16:40.227830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.448 [2024-07-15 16:16:40.227840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:32112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.448 [2024-07-15 16:16:40.227847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.448 [2024-07-15 16:16:40.227856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.448 [2024-07-15 16:16:40.227863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.448 [2024-07-15 16:16:40.227873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.448 [2024-07-15 16:16:40.227881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.448 [2024-07-15 16:16:40.227893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:32136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.448 [2024-07-15 16:16:40.227900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.448 [2024-07-15 16:16:40.227910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.448 [2024-07-15 16:16:40.227917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.448 [2024-07-15 16:16:40.227926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:32152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.448 [2024-07-15 16:16:40.227934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.448 [2024-07-15 16:16:40.227943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:32160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.448 [2024-07-15 16:16:40.227950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.448 [2024-07-15 16:16:40.227959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:32168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.448 [2024-07-15 16:16:40.227966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.448 [2024-07-15 16:16:40.227975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:32176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.448 [2024-07-15 16:16:40.227982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.448 [2024-07-15 16:16:40.227992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:32184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.448 [2024-07-15 16:16:40.227999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.448 [2024-07-15 16:16:40.228008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.448 [2024-07-15 16:16:40.228014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.448 [2024-07-15 16:16:40.228024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:32200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.448 [2024-07-15 16:16:40.228031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.448 [2024-07-15 16:16:40.228040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:32208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.448 [2024-07-15 16:16:40.228047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.448 [2024-07-15 16:16:40.228056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:32216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.448 [2024-07-15 16:16:40.228063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.448 [2024-07-15 16:16:40.228072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:32224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.448 [2024-07-15 16:16:40.228079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.448 [2024-07-15 16:16:40.228088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:32232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.448 [2024-07-15 16:16:40.228096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.448 [2024-07-15 16:16:40.228107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:32240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.448 [2024-07-15 16:16:40.228114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.448 [2024-07-15 16:16:40.228130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:32248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.448 [2024-07-15 16:16:40.228137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.448 [2024-07-15 16:16:40.228147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:32256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.448 [2024-07-15 16:16:40.228153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.448 [2024-07-15 16:16:40.228163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:32264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.448 [2024-07-15 16:16:40.228169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.448 
[2024-07-15 16:16:40.228178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:32272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.448 [2024-07-15 16:16:40.228186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.448 [2024-07-15 16:16:40.228195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:32280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.448 [2024-07-15 16:16:40.228202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.448 [2024-07-15 16:16:40.228211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:32288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.448 [2024-07-15 16:16:40.228218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.449 [2024-07-15 16:16:40.228227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:32296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.449 [2024-07-15 16:16:40.228234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.449 [2024-07-15 16:16:40.228243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:32304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.449 [2024-07-15 16:16:40.228250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.449 [2024-07-15 16:16:40.228259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:32312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.449 [2024-07-15 16:16:40.228266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.449 [2024-07-15 16:16:40.228275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:32320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.449 [2024-07-15 16:16:40.228282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.449 [2024-07-15 16:16:40.228292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:32328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.449 [2024-07-15 16:16:40.228299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.449 [2024-07-15 16:16:40.228308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:32336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.449 [2024-07-15 16:16:40.228316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.449 [2024-07-15 16:16:40.228326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:32344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.449 [2024-07-15 16:16:40.228333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.449 [2024-07-15 16:16:40.228343] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:32352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.449 [2024-07-15 16:16:40.228350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.449 [2024-07-15 16:16:40.228360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:32360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.449 [2024-07-15 16:16:40.228366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.449 [2024-07-15 16:16:40.228376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:32368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.449 [2024-07-15 16:16:40.228383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.449 [2024-07-15 16:16:40.228392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:32376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.449 [2024-07-15 16:16:40.228399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.449 [2024-07-15 16:16:40.228408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:32384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.449 [2024-07-15 16:16:40.228415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.449 [2024-07-15 16:16:40.228424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:32392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.449 [2024-07-15 16:16:40.228431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.449 [2024-07-15 16:16:40.228441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:32400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.449 [2024-07-15 16:16:40.228447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.449 [2024-07-15 16:16:40.228457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:32408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.449 [2024-07-15 16:16:40.228464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.449 [2024-07-15 16:16:40.228474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:32416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.449 [2024-07-15 16:16:40.228481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.449 [2024-07-15 16:16:40.228490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:32424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.449 [2024-07-15 16:16:40.228497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.449 [2024-07-15 16:16:40.228506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:78 nsid:1 lba:32432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.449 [2024-07-15 16:16:40.228514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.449 [2024-07-15 16:16:40.228525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:32440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.449 [2024-07-15 16:16:40.228532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.449 [2024-07-15 16:16:40.228541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:32448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.449 [2024-07-15 16:16:40.228548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.449 [2024-07-15 16:16:40.228557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:32456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.449 [2024-07-15 16:16:40.228564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.449 [2024-07-15 16:16:40.228573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:32464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.449 [2024-07-15 16:16:40.228580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.449 [2024-07-15 16:16:40.228589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:32472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.449 [2024-07-15 16:16:40.228595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.449 [2024-07-15 16:16:40.228604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:32480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.449 [2024-07-15 16:16:40.228611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.449 [2024-07-15 16:16:40.228620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:32488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.449 [2024-07-15 16:16:40.228628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.449 [2024-07-15 16:16:40.228638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.449 [2024-07-15 16:16:40.228647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.449 [2024-07-15 16:16:40.228658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:32504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.449 [2024-07-15 16:16:40.228667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.449 [2024-07-15 16:16:40.228679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:32512 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.449 [2024-07-15 16:16:40.228689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.449 [2024-07-15 16:16:40.228700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:32520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.449 [2024-07-15 16:16:40.228710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.450 [2024-07-15 16:16:40.228722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:32528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.450 [2024-07-15 16:16:40.228731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.450 [2024-07-15 16:16:40.228743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:32536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.450 [2024-07-15 16:16:40.228753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.450 [2024-07-15 16:16:40.228763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:32544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.450 [2024-07-15 16:16:40.228770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.450 [2024-07-15 16:16:40.228779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:32552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.450 [2024-07-15 16:16:40.228787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.450 [2024-07-15 16:16:40.228797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:32560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.450 [2024-07-15 16:16:40.228804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.450 [2024-07-15 16:16:40.228813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.450 [2024-07-15 16:16:40.228820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.450 [2024-07-15 16:16:40.228830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.450 [2024-07-15 16:16:40.228837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.450 [2024-07-15 16:16:40.228846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:32584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.450 [2024-07-15 16:16:40.228853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.450 [2024-07-15 16:16:40.228862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:32592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:15.450 [2024-07-15 16:16:40.228869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.450 [2024-07-15 16:16:40.228878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:32640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.450 [2024-07-15 16:16:40.228886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.450 [2024-07-15 16:16:40.228895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:32648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.450 [2024-07-15 16:16:40.228902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.450 [2024-07-15 16:16:40.228911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:32656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.450 [2024-07-15 16:16:40.228918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.450 [2024-07-15 16:16:40.228927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:32664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.450 [2024-07-15 16:16:40.228934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.450 [2024-07-15 16:16:40.228942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.450 [2024-07-15 16:16:40.228949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.450 [2024-07-15 16:16:40.228960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:32680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.450 [2024-07-15 16:16:40.228967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.450 [2024-07-15 16:16:40.228976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:32688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.450 [2024-07-15 16:16:40.228982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.450 [2024-07-15 16:16:40.228992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:32600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.450 [2024-07-15 16:16:40.228998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.450 [2024-07-15 16:16:40.229010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:32608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.450 [2024-07-15 16:16:40.229018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.450 [2024-07-15 16:16:40.229027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:32616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.450 [2024-07-15 16:16:40.229034] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.450 [2024-07-15 16:16:40.229043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:32624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.450 [2024-07-15 16:16:40.229050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.450 [2024-07-15 16:16:40.229060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:32632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.450 [2024-07-15 16:16:40.229067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.450 [2024-07-15 16:16:40.229075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:32696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.450 [2024-07-15 16:16:40.229082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.450 [2024-07-15 16:16:40.229091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:32704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.450 [2024-07-15 16:16:40.229099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.450 [2024-07-15 16:16:40.229108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:32712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.450 [2024-07-15 16:16:40.229114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.450 [2024-07-15 16:16:40.229127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:32720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.450 [2024-07-15 16:16:40.229134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.450 [2024-07-15 16:16:40.229144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:32728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.450 [2024-07-15 16:16:40.229151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.450 [2024-07-15 16:16:40.229160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.450 [2024-07-15 16:16:40.229167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.450 [2024-07-15 16:16:40.229177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:32744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.450 [2024-07-15 16:16:40.229186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.450 [2024-07-15 16:16:40.229195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:32752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.450 [2024-07-15 16:16:40.229202] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.450 [2024-07-15 16:16:40.229211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.450 [2024-07-15 16:16:40.229218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.450 [2024-07-15 16:16:40.229227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:32768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.450 [2024-07-15 16:16:40.229234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.450 [2024-07-15 16:16:40.229243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:32776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.450 [2024-07-15 16:16:40.229250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.450 [2024-07-15 16:16:40.229259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:32784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.450 [2024-07-15 16:16:40.229266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.450 [2024-07-15 16:16:40.229275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:32792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.450 [2024-07-15 16:16:40.229282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.450 [2024-07-15 16:16:40.229291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:32800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.450 [2024-07-15 16:16:40.229298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.450 [2024-07-15 16:16:40.229308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:32808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.450 [2024-07-15 16:16:40.229314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.450 [2024-07-15 16:16:40.229324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:32816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.450 [2024-07-15 16:16:40.229331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.450 [2024-07-15 16:16:40.229342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:32824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.450 [2024-07-15 16:16:40.229349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.450 [2024-07-15 16:16:40.229359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:32832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.450 [2024-07-15 16:16:40.229366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.450 [2024-07-15 16:16:40.229376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:32840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.450 [2024-07-15 16:16:40.229386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.450 [2024-07-15 16:16:40.229395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:32848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.450 [2024-07-15 16:16:40.229402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.450 [2024-07-15 16:16:40.229413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:32856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.450 [2024-07-15 16:16:40.229420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.450 [2024-07-15 16:16:40.229429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:32864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.451 [2024-07-15 16:16:40.229436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.451 [2024-07-15 16:16:40.229446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:32872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.451 [2024-07-15 16:16:40.229454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.451 [2024-07-15 16:16:40.229463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:32880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.451 [2024-07-15 16:16:40.229471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.451 [2024-07-15 16:16:40.229481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.451 [2024-07-15 16:16:40.229488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.451 [2024-07-15 16:16:40.229496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.451 [2024-07-15 16:16:40.229505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.451 [2024-07-15 16:16:40.229515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:32904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.451 [2024-07-15 16:16:40.229522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.451 [2024-07-15 16:16:40.229531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:32912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.451 [2024-07-15 16:16:40.229539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.451 
[2024-07-15 16:16:40.229549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:32920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.451 [2024-07-15 16:16:40.229556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.451 [2024-07-15 16:16:40.229567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:32928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.451 [2024-07-15 16:16:40.229574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.451 [2024-07-15 16:16:40.229584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:32936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.451 [2024-07-15 16:16:40.229591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.451 [2024-07-15 16:16:40.229605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:32944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.451 [2024-07-15 16:16:40.229613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.451 [2024-07-15 16:16:40.229622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:32952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.451 [2024-07-15 16:16:40.229629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.451 [2024-07-15 16:16:40.229638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:32960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.451 [2024-07-15 16:16:40.229647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.451 [2024-07-15 16:16:40.229656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:32968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.451 [2024-07-15 16:16:40.229663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.451 [2024-07-15 16:16:40.229682] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.451 [2024-07-15 16:16:40.229690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32976 len:8 PRP1 0x0 PRP2 0x0 00:25:15.451 [2024-07-15 16:16:40.229698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.451 [2024-07-15 16:16:40.229707] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.451 [2024-07-15 16:16:40.229713] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.451 [2024-07-15 16:16:40.229720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32984 len:8 PRP1 0x0 PRP2 0x0 00:25:15.451 [2024-07-15 16:16:40.229727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.451 [2024-07-15 16:16:40.229737] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.451 [2024-07-15 16:16:40.229743] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.451 [2024-07-15 16:16:40.229749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32992 len:8 PRP1 0x0 PRP2 0x0 00:25:15.451 [2024-07-15 16:16:40.229756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.451 [2024-07-15 16:16:40.229793] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf49480 was disconnected and freed. reset controller. 00:25:15.451 [2024-07-15 16:16:40.229802] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:15.451 [2024-07-15 16:16:40.229821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.451 [2024-07-15 16:16:40.229829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.451 [2024-07-15 16:16:40.229837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.451 [2024-07-15 16:16:40.229847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.451 [2024-07-15 16:16:40.229856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.451 [2024-07-15 16:16:40.229863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.451 [2024-07-15 16:16:40.229872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.451 [2024-07-15 16:16:40.229882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.451 [2024-07-15 16:16:40.229889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.451 [2024-07-15 16:16:40.229921] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf25ef0 (9): Bad file descriptor 00:25:15.451 [2024-07-15 16:16:40.233480] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.451 [2024-07-15 16:16:40.266575] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:15.451 [2024-07-15 16:16:44.574708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:44016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.451 [2024-07-15 16:16:44.574747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.451 [2024-07-15 16:16:44.574764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:44024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.451 [2024-07-15 16:16:44.574773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.451 [2024-07-15 16:16:44.574783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:44032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.451 [2024-07-15 16:16:44.574790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.451 [2024-07-15 16:16:44.574800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:44040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.451 [2024-07-15 16:16:44.574807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.451 [2024-07-15 16:16:44.574816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:44048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.451 [2024-07-15 16:16:44.574824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.451 [2024-07-15 16:16:44.574833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:44056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.451 [2024-07-15 16:16:44.574840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.451 [2024-07-15 16:16:44.574849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:44064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.451 [2024-07-15 16:16:44.574856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.451 [2024-07-15 16:16:44.574867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:44072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.451 [2024-07-15 16:16:44.574874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.451 [2024-07-15 16:16:44.574884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:44080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.451 [2024-07-15 16:16:44.574890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.451 [2024-07-15 16:16:44.574900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:44088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.451 [2024-07-15 16:16:44.574907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.451 [2024-07-15 16:16:44.574917] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:44096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.451 [2024-07-15 16:16:44.574929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.451 [2024-07-15 16:16:44.574938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:44104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.451 [2024-07-15 16:16:44.574945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.451 [2024-07-15 16:16:44.574954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:44112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.451 [2024-07-15 16:16:44.574961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.451 [2024-07-15 16:16:44.574971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:44120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.451 [2024-07-15 16:16:44.574978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.451 [2024-07-15 16:16:44.574987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:44128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.451 [2024-07-15 16:16:44.574994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.451 [2024-07-15 16:16:44.575003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:44136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.451 [2024-07-15 16:16:44.575011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.451 [2024-07-15 16:16:44.575020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:44144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.451 [2024-07-15 16:16:44.575027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.451 [2024-07-15 16:16:44.575037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:44152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.452 [2024-07-15 16:16:44.575044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.452 [2024-07-15 16:16:44.575053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:44160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.452 [2024-07-15 16:16:44.575060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.452 [2024-07-15 16:16:44.575070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:44168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.452 [2024-07-15 16:16:44.575077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.452 [2024-07-15 16:16:44.575086] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:44176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.452 [2024-07-15 16:16:44.575093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.452 [2024-07-15 16:16:44.575101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:44184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.452 [2024-07-15 16:16:44.575109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.452 [2024-07-15 16:16:44.575118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:44192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.452 [2024-07-15 16:16:44.575131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.452 [2024-07-15 16:16:44.575142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:44200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.452 [2024-07-15 16:16:44.575149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.452 [2024-07-15 16:16:44.575158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:44208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.452 [2024-07-15 16:16:44.575166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.452 [2024-07-15 16:16:44.575175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:44216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.452 [2024-07-15 16:16:44.575183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.452 [2024-07-15 16:16:44.575192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:44224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.452 [2024-07-15 16:16:44.575198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.452 [2024-07-15 16:16:44.575207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:44232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.452 [2024-07-15 16:16:44.575215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.452 [2024-07-15 16:16:44.575224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:44240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.452 [2024-07-15 16:16:44.575231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.452 [2024-07-15 16:16:44.575240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:44248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.452 [2024-07-15 16:16:44.575246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.452 [2024-07-15 16:16:44.575255] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:97 nsid:1 lba:44256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.452 [2024-07-15 16:16:44.575263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.452 [2024-07-15 16:16:44.575272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.452 [2024-07-15 16:16:44.575279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.452 [2024-07-15 16:16:44.575288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:44272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.452 [2024-07-15 16:16:44.575296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.452 [2024-07-15 16:16:44.575305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:44280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.452 [2024-07-15 16:16:44.575313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.452 [2024-07-15 16:16:44.575322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:44288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.452 [2024-07-15 16:16:44.575329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.452 [2024-07-15 16:16:44.575338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:44296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.452 [2024-07-15 16:16:44.575346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.452 [2024-07-15 16:16:44.575356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:44304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.452 [2024-07-15 16:16:44.575363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.452 [2024-07-15 16:16:44.575372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:44312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.452 [2024-07-15 16:16:44.575379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.452 [2024-07-15 16:16:44.575388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:44320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.452 [2024-07-15 16:16:44.575395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.452 [2024-07-15 16:16:44.575404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:44328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.452 [2024-07-15 16:16:44.575411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.452 [2024-07-15 16:16:44.575421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:44336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.452 [2024-07-15 16:16:44.575428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.452 [2024-07-15 16:16:44.575437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:44344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.452 [2024-07-15 16:16:44.575444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.452 [2024-07-15 16:16:44.575453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:44352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.452 [2024-07-15 16:16:44.575461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.452 [2024-07-15 16:16:44.575470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:44360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.452 [2024-07-15 16:16:44.575477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.452 [2024-07-15 16:16:44.575486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:44368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.452 [2024-07-15 16:16:44.575494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.452 [2024-07-15 16:16:44.575503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:44376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.452 [2024-07-15 16:16:44.575510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.452 [2024-07-15 16:16:44.575519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:44384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.452 [2024-07-15 16:16:44.575526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.452 [2024-07-15 16:16:44.575535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:44392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.452 [2024-07-15 16:16:44.575543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.452 [2024-07-15 16:16:44.575554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:44400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.452 [2024-07-15 16:16:44.575561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.452 [2024-07-15 16:16:44.575570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:44408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.452 [2024-07-15 16:16:44.575577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.452 [2024-07-15 16:16:44.575586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:44416 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:15.452 [2024-07-15 16:16:44.575593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.452 [2024-07-15 16:16:44.575602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:44424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.452 [2024-07-15 16:16:44.575609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.452 [2024-07-15 16:16:44.575618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:44432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.452 [2024-07-15 16:16:44.575625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.452 [2024-07-15 16:16:44.575634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:44440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.452 [2024-07-15 16:16:44.575642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.452 [2024-07-15 16:16:44.575651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:44448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.452 [2024-07-15 16:16:44.575658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.452 [2024-07-15 16:16:44.575667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:44456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.452 [2024-07-15 16:16:44.575674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.452 [2024-07-15 16:16:44.575683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:44464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.452 [2024-07-15 16:16:44.575691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.452 [2024-07-15 16:16:44.575700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:44472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.452 [2024-07-15 16:16:44.575707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.452 [2024-07-15 16:16:44.575716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:44480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.452 [2024-07-15 16:16:44.575722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.452 [2024-07-15 16:16:44.575731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:44488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.452 [2024-07-15 16:16:44.575739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.453 [2024-07-15 16:16:44.575748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:44496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.453 [2024-07-15 
16:16:44.575756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.453 [2024-07-15 16:16:44.575765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:44504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.453 [2024-07-15 16:16:44.575772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.453 [2024-07-15 16:16:44.575781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:44512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.453 [2024-07-15 16:16:44.575788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.453 [2024-07-15 16:16:44.575797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:44520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.453 [2024-07-15 16:16:44.575804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.453 [2024-07-15 16:16:44.575814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:44528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.453 [2024-07-15 16:16:44.575821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.453 [2024-07-15 16:16:44.575830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:44536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.453 [2024-07-15 16:16:44.575837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.453 [2024-07-15 16:16:44.575847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:44544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.453 [2024-07-15 16:16:44.575854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.453 [2024-07-15 16:16:44.575863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:44552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.453 [2024-07-15 16:16:44.575870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.453 [2024-07-15 16:16:44.575879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:44560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.453 [2024-07-15 16:16:44.575886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.453 [2024-07-15 16:16:44.575895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:44568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.453 [2024-07-15 16:16:44.575902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.453 [2024-07-15 16:16:44.575911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:44576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.453 [2024-07-15 16:16:44.575918] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.453 [2024-07-15 16:16:44.575926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:44584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.453 [2024-07-15 16:16:44.575934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.453 [2024-07-15 16:16:44.575943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:44592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.453 [2024-07-15 16:16:44.575950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.453 [2024-07-15 16:16:44.575959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:44600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.453 [2024-07-15 16:16:44.575968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.453 [2024-07-15 16:16:44.575977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:44608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.453 [2024-07-15 16:16:44.575984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.453 [2024-07-15 16:16:44.575995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:44616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.453 [2024-07-15 16:16:44.576002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.453 [2024-07-15 16:16:44.576011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:44624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.453 [2024-07-15 16:16:44.576018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.453 [2024-07-15 16:16:44.576028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:44632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.453 [2024-07-15 16:16:44.576035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.453 [2024-07-15 16:16:44.576044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:44640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.453 [2024-07-15 16:16:44.576051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.453 [2024-07-15 16:16:44.576060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:44648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.453 [2024-07-15 16:16:44.576067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.453 [2024-07-15 16:16:44.576078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:44656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.453 [2024-07-15 16:16:44.576085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.453 [2024-07-15 16:16:44.576094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:44664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.453 [2024-07-15 16:16:44.576100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.453 [2024-07-15 16:16:44.576109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:44672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.453 [2024-07-15 16:16:44.576116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.453 [2024-07-15 16:16:44.576136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.453 [2024-07-15 16:16:44.576144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.453 [2024-07-15 16:16:44.576153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:44688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.453 [2024-07-15 16:16:44.576160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.453 [2024-07-15 16:16:44.576169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:44696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.453 [2024-07-15 16:16:44.576175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.453 [2024-07-15 16:16:44.576187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:44704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.453 [2024-07-15 16:16:44.576194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.453 [2024-07-15 16:16:44.576203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:44712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.453 [2024-07-15 16:16:44.576210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.453 [2024-07-15 16:16:44.576219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:44720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.453 [2024-07-15 16:16:44.576226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.453 [2024-07-15 16:16:44.576235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:44728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.453 [2024-07-15 16:16:44.576242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.453 [2024-07-15 16:16:44.576251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:44736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.453 [2024-07-15 16:16:44.576258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:15.453 [2024-07-15 16:16:44.576267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:44744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.453 [2024-07-15 16:16:44.576275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.453 [2024-07-15 16:16:44.576284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:44752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.453 [2024-07-15 16:16:44.576291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.453 [2024-07-15 16:16:44.576300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:44760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.453 [2024-07-15 16:16:44.576307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.453 [2024-07-15 16:16:44.576316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:44768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.453 [2024-07-15 16:16:44.576324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.453 [2024-07-15 16:16:44.576333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:44776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.453 [2024-07-15 16:16:44.576340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.453 [2024-07-15 16:16:44.576349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:44784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.453 [2024-07-15 16:16:44.576356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.453 [2024-07-15 16:16:44.576365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:44792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.454 [2024-07-15 16:16:44.576373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.454 [2024-07-15 16:16:44.576382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:44800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.454 [2024-07-15 16:16:44.576391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.454 [2024-07-15 16:16:44.576400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:44808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.454 [2024-07-15 16:16:44.576407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.454 [2024-07-15 16:16:44.576415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:44816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.454 [2024-07-15 16:16:44.576423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.454 [2024-07-15 
16:16:44.576433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:44824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.454 [2024-07-15 16:16:44.576440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.454 [2024-07-15 16:16:44.576449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:44832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.454 [2024-07-15 16:16:44.576456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.454 [2024-07-15 16:16:44.576464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:44840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.454 [2024-07-15 16:16:44.576472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.454 [2024-07-15 16:16:44.576481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:44848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.454 [2024-07-15 16:16:44.576488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.454 [2024-07-15 16:16:44.576497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:44856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.454 [2024-07-15 16:16:44.576504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.454 [2024-07-15 16:16:44.576513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:44864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.454 [2024-07-15 16:16:44.576520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.454 [2024-07-15 16:16:44.576529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:44872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.454 [2024-07-15 16:16:44.576536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.454 [2024-07-15 16:16:44.576545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.454 [2024-07-15 16:16:44.576552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.454 [2024-07-15 16:16:44.576561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:44888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.454 [2024-07-15 16:16:44.576568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.454 [2024-07-15 16:16:44.576577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:44896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.454 [2024-07-15 16:16:44.576584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.454 [2024-07-15 16:16:44.576594] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:44904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.454 [2024-07-15 16:16:44.576601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.454 [2024-07-15 16:16:44.576611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:44912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.454 [2024-07-15 16:16:44.576618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.454 [2024-07-15 16:16:44.576627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:44920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.454 [2024-07-15 16:16:44.576633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.454 [2024-07-15 16:16:44.576642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:44928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.454 [2024-07-15 16:16:44.576649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.454 [2024-07-15 16:16:44.576658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:44936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.454 [2024-07-15 16:16:44.576665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.454 [2024-07-15 16:16:44.576674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:44944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.454 [2024-07-15 16:16:44.576681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.454 [2024-07-15 16:16:44.576690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.454 [2024-07-15 16:16:44.576696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.454 [2024-07-15 16:16:44.576706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:44960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.454 [2024-07-15 16:16:44.576713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.454 [2024-07-15 16:16:44.576722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:44968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.454 [2024-07-15 16:16:44.576729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.454 [2024-07-15 16:16:44.576738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:44976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.454 [2024-07-15 16:16:44.576744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.454 [2024-07-15 16:16:44.576754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:31 nsid:1 lba:44984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.454 [2024-07-15 16:16:44.576761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.454 [2024-07-15 16:16:44.576770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:44992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.454 [2024-07-15 16:16:44.576777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.454 [2024-07-15 16:16:44.576790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:45000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.454 [2024-07-15 16:16:44.576797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.454 [2024-07-15 16:16:44.576808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:45008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.454 [2024-07-15 16:16:44.576815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.454 [2024-07-15 16:16:44.576824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:45016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.454 [2024-07-15 16:16:44.576831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.454 [2024-07-15 16:16:44.576839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:45024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.454 [2024-07-15 16:16:44.576846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.454 [2024-07-15 16:16:44.576869] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.454 [2024-07-15 16:16:44.576875] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.454 [2024-07-15 16:16:44.576882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45032 len:8 PRP1 0x0 PRP2 0x0 00:25:15.454 [2024-07-15 16:16:44.576890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.454 [2024-07-15 16:16:44.576929] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf49270 was disconnected and freed. reset controller. 
00:25:15.454 [2024-07-15 16:16:44.576939] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:15.454 [2024-07-15 16:16:44.576959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.454 [2024-07-15 16:16:44.576967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.454 [2024-07-15 16:16:44.576975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.454 [2024-07-15 16:16:44.576983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.454 [2024-07-15 16:16:44.576991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.454 [2024-07-15 16:16:44.576998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.454 [2024-07-15 16:16:44.577006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.454 [2024-07-15 16:16:44.577013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.454 [2024-07-15 16:16:44.577021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.454 [2024-07-15 16:16:44.577051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf25ef0 (9): Bad file descriptor 00:25:15.454 [2024-07-15 16:16:44.580599] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.454 [2024-07-15 16:16:44.658024] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
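Each failover cycle captured above follows the same pattern: I/Os still queued on the active queue pair are completed manually with ABORTED - SQ DELETION status, the qpair is disconnected and freed, bdev_nvme fails over to the next registered path (here from 10.0.0.2:4422 back to 10.0.0.2:4420), and the cycle ends with "Resetting controller successful". A rough, editor-added way to summarize those transitions from the captured bdevperf output file (try.txt, whose full path appears later in the trace) would be:

    grep -o 'Start failover from [0-9.:]* to [0-9.:]*' try.txt
    grep -c 'Resetting controller successful' try.txt    # mirrors the check at failover.sh@65 below, which expects 3

Only the second line corresponds to something the test actually does; the first is just a convenient way to read the log.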
00:25:15.454 00:25:15.454 Latency(us) 00:25:15.454 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:15.454 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:15.454 Verification LBA range: start 0x0 length 0x4000 00:25:15.454 NVMe0n1 : 15.01 11549.41 45.11 350.52 0.00 10728.29 1030.83 29709.65 00:25:15.454 =================================================================================================================== 00:25:15.454 Total : 11549.41 45.11 350.52 0.00 10728.29 1030.83 29709.65 00:25:15.454 Received shutdown signal, test time was about 15.000000 seconds 00:25:15.454 00:25:15.454 Latency(us) 00:25:15.454 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:15.454 =================================================================================================================== 00:25:15.454 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:15.454 16:16:50 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:25:15.454 16:16:50 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:25:15.455 16:16:50 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:25:15.455 16:16:50 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2404506 00:25:15.455 16:16:50 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2404506 /var/tmp/bdevperf.sock 00:25:15.455 16:16:50 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:25:15.455 16:16:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2404506 ']' 00:25:15.455 16:16:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:15.455 16:16:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:15.455 16:16:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:15.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
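For the second phase the script relaunches bdevperf with -z, so the process starts idle and listens on a UNIX-domain RPC socket instead of running a workload immediately, and waitforlisten blocks until that socket is usable. A minimal stand-alone sketch of this step, using the same arguments as the trace (paths relative to the spdk checkout; waitforlisten's real internals live in autotest_common.sh and are only approximated here by polling rpc_get_methods):

    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!
    # crude stand-in for waitforlisten: poll the RPC socket until it answers
    until scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done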
00:25:15.455 16:16:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:15.455 16:16:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:16.025 16:16:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:16.025 16:16:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:16.025 16:16:51 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:16.025 [2024-07-15 16:16:51.861930] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:16.285 16:16:51 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:16.285 [2024-07-15 16:16:52.018268] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:16.285 16:16:52 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:16.546 NVMe0n1 00:25:16.546 16:16:52 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:17.131 00:25:17.131 16:16:52 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:17.131 00:25:17.131 16:16:52 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:17.131 16:16:52 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:17.390 16:16:53 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:17.650 16:16:53 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:20.949 16:16:56 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:20.949 16:16:56 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:20.949 16:16:56 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2405688 00:25:20.949 16:16:56 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:20.949 16:16:56 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 2405688 00:25:21.916 0 00:25:21.916 16:16:57 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:21.916 [2024-07-15 16:16:50.956264] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
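Condensed, the RPC sequence traced above builds one controller with three paths and then removes the active path so that bdev_nvme has to fail over while I/O is outstanding. The commands below are the ones shown in the trace, with the long rpc.py paths shortened and comments added for orientation:

    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # creates NVMe0n1
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # adds 10.0.0.2:4421 as a failover path (no new bdev)
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # adds 10.0.0.2:4422 as a failover path
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1      # drop the active path; the failover to 4421 logged below follows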
00:25:21.916 [2024-07-15 16:16:50.956322] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2404506 ] 00:25:21.916 EAL: No free 2048 kB hugepages reported on node 1 00:25:21.916 [2024-07-15 16:16:51.015264] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.916 [2024-07-15 16:16:51.079818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.916 [2024-07-15 16:16:53.247732] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:21.916 [2024-07-15 16:16:53.247777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:21.916 [2024-07-15 16:16:53.247788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.916 [2024-07-15 16:16:53.247797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:21.916 [2024-07-15 16:16:53.247804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.916 [2024-07-15 16:16:53.247812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:21.916 [2024-07-15 16:16:53.247819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.916 [2024-07-15 16:16:53.247827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:21.916 [2024-07-15 16:16:53.247834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.916 [2024-07-15 16:16:53.247841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:21.916 [2024-07-15 16:16:53.247867] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:21.916 [2024-07-15 16:16:53.247881] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e05ef0 (9): Bad file descriptor 00:25:21.916 [2024-07-15 16:16:53.255639] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:21.916 Running I/O for 1 seconds... 
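With the paths in place, the one-second verify workload is kicked off over RPC (recall that bdevperf was started with -z) and the script waits for the helper to exit. Stripped of the wrapper functions shown in the trace, that step is roughly:

    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    run_test_pid=$!
    wait $run_test_pid    # returns once the run completes; the failover from 10.0.0.2:4420 to 10.0.0.2:4421 happens while I/O is in flight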
00:25:21.916 00:25:21.916 Latency(us) 00:25:21.916 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:21.916 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:21.916 Verification LBA range: start 0x0 length 0x4000 00:25:21.916 NVMe0n1 : 1.01 11202.68 43.76 0.00 0.00 11365.68 2607.79 12451.84 00:25:21.916 =================================================================================================================== 00:25:21.916 Total : 11202.68 43.76 0.00 0.00 11365.68 2607.79 12451.84 00:25:21.916 16:16:57 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:21.916 16:16:57 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:21.916 16:16:57 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:22.177 16:16:57 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:22.177 16:16:57 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:22.438 16:16:58 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:22.438 16:16:58 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:25.795 16:17:01 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:25.795 16:17:01 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:25.795 16:17:01 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 2404506 00:25:25.795 16:17:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2404506 ']' 00:25:25.795 16:17:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2404506 00:25:25.795 16:17:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:25:25.795 16:17:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:25.795 16:17:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2404506 00:25:25.795 16:17:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:25.795 16:17:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:25.795 16:17:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2404506' 00:25:25.795 killing process with pid 2404506 00:25:25.795 16:17:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2404506 00:25:25.795 16:17:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2404506 00:25:26.098 16:17:01 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:26.098 16:17:01 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:26.098 16:17:01 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:26.098 
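The rest of the trace walks the controller back down, detaching one path at a time and re-checking after each step that the NVMe0 controller is still registered with bdevperf. A minimal sketch of that sequence (the real control flow lives in failover.sh and simply lets the grep exit status propagate rather than branching explicitly):

    for port in 4422 4421; do
        scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0
        scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done
    sleep 3
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0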
16:17:01 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:26.098 16:17:01 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:26.098 16:17:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:26.098 16:17:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:25:26.098 16:17:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:26.098 16:17:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:25:26.098 16:17:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:26.098 16:17:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:26.098 rmmod nvme_tcp 00:25:26.098 rmmod nvme_fabrics 00:25:26.098 rmmod nvme_keyring 00:25:26.099 16:17:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:26.099 16:17:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:25:26.099 16:17:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:25:26.099 16:17:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2400878 ']' 00:25:26.099 16:17:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2400878 00:25:26.099 16:17:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2400878 ']' 00:25:26.099 16:17:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2400878 00:25:26.099 16:17:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:25:26.099 16:17:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:26.099 16:17:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2400878 00:25:26.099 16:17:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:26.099 16:17:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:26.099 16:17:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2400878' 00:25:26.099 killing process with pid 2400878 00:25:26.099 16:17:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2400878 00:25:26.099 16:17:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2400878 00:25:26.359 16:17:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:26.359 16:17:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:26.359 16:17:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:26.359 16:17:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:26.359 16:17:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:26.359 16:17:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:26.359 16:17:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:26.359 16:17:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.271 16:17:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:28.271 00:25:28.271 real 0m39.353s 00:25:28.271 user 2m1.502s 00:25:28.271 sys 0m8.085s 00:25:28.272 16:17:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:28.272 16:17:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
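Teardown, as traced here, removes the scratch output file, unloads the host-side NVMe modules, kills the nvmf target (pid 2400878 in this run), and finishes by flushing the test interface address. Reduced to the commands that actually appear in the trace, with the helper-function internals omitted, it is roughly:

    rm -f test/nvmf/host/try.txt
    modprobe -v -r nvme-tcp        # the rmmod lines above show nvme_tcp, nvme_fabrics and nvme_keyring going away
    modprobe -v -r nvme-fabrics
    kill $nvmfpid && wait $nvmfpid # $nvmfpid was 2400878 here
    ip -4 addr flush cvl_0_1       # final step, shown just below in the trace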
00:25:28.272 ************************************ 00:25:28.272 END TEST nvmf_failover 00:25:28.272 ************************************ 00:25:28.532 16:17:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:28.532 16:17:04 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:28.532 16:17:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:28.532 16:17:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:28.532 16:17:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:28.532 ************************************ 00:25:28.532 START TEST nvmf_host_discovery 00:25:28.532 ************************************ 00:25:28.532 16:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:28.532 * Looking for test storage... 00:25:28.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:28.532 16:17:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:28.532 16:17:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:28.532 16:17:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:28.532 16:17:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:28.533 16:17:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:28.533 16:17:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:28.533 16:17:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:28.533 16:17:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:28.533 16:17:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:28.533 16:17:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:28.533 16:17:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:28.533 16:17:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:28.533 16:17:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:28.533 16:17:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:28.533 16:17:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:28.533 16:17:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:28.533 16:17:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:28.533 16:17:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:28.533 16:17:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:28.533 16:17:04 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:28.533 16:17:04 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:28.533 16:17:04 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:28.533 16:17:04 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.533 16:17:04 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.533 16:17:04 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.533 16:17:04 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:28.533 16:17:04 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.533 16:17:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:25:28.533 16:17:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:28.533 16:17:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:28.533 16:17:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:28.533 16:17:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:28.533 16:17:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:28.533 16:17:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:28.533 16:17:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:28.533 16:17:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:28.533 16:17:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:28.533 16:17:04 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:28.533 16:17:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:28.533 16:17:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:28.533 16:17:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:28.533 16:17:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:28.533 16:17:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:28.533 16:17:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:28.533 16:17:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:28.533 16:17:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:28.533 16:17:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:28.533 16:17:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:28.533 16:17:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.533 16:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:28.533 16:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.533 16:17:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:28.533 16:17:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:28.533 16:17:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:25:28.533 16:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.677 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:36.677 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:25:36.677 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:36.677 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:36.677 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:36.677 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:36.677 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:36.677 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:25:36.677 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:36.677 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:25:36.677 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:25:36.677 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:25:36.677 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:25:36.677 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:25:36.677 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:25:36.677 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:36.677 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:36.677 16:17:11 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:36.677 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:36.677 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:36.677 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:36.677 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:36.677 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:36.677 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:36.677 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:36.677 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:36.677 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:36.677 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:36.678 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:36.678 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:36.678 16:17:11 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:36.678 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:36.678 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:36.678 16:17:11 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:36.678 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:36.678 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:36.678 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.462 ms 00:25:36.678 00:25:36.678 --- 10.0.0.2 ping statistics --- 00:25:36.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.679 rtt min/avg/max/mdev = 0.462/0.462/0.462/0.000 ms 00:25:36.679 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:36.679 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:36.679 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.433 ms 00:25:36.679 00:25:36.679 --- 10.0.0.1 ping statistics --- 00:25:36.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.679 rtt min/avg/max/mdev = 0.433/0.433/0.433/0.000 ms 00:25:36.679 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:36.679 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:25:36.679 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:36.679 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:36.679 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:36.679 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:36.679 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:36.679 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:36.679 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:36.679 16:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:36.679 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:36.679 16:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:36.679 16:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.679 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=2410700 00:25:36.679 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 
2410700 00:25:36.679 16:17:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:36.679 16:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 2410700 ']' 00:25:36.679 16:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:36.679 16:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:36.679 16:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:36.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:36.679 16:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:36.679 16:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.679 [2024-07-15 16:17:11.700092] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:25:36.679 [2024-07-15 16:17:11.700162] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:36.679 EAL: No free 2048 kB hugepages reported on node 1 00:25:36.679 [2024-07-15 16:17:11.789638] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.679 [2024-07-15 16:17:11.883178] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:36.679 [2024-07-15 16:17:11.883234] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:36.679 [2024-07-15 16:17:11.883243] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:36.679 [2024-07-15 16:17:11.883250] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:36.679 [2024-07-15 16:17:11.883255] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
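Before the discovery test itself starts, nvmf/common.sh (traced above) wires the two ice/e810 ports found at 0000:4b:00.0 and 0000:4b:00.1 -- cvl_0_0 and cvl_0_1 -- into a point-to-point NVMe/TCP setup: the target interface is moved into a private network namespace, both sides get 10.0.0.x/24 addresses, TCP port 4420 is opened, reachability is checked with ping in both directions, and nvmf_tgt is then launched inside the namespace on core 1. Condensed from the trace, with binary paths shortened but interface names and addresses exactly as used in this run, the plumbing is roughly:

  # Target NIC goes into its own namespace; the initiator NIC stays in the root ns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side

  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Open the NVMe/TCP port and sanity-check both directions.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

  # Load the host-side transport and start the target inside the namespace
  # (core mask 0x2 = core 1); the test then waits on /var/tmp/spdk.sock.
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &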
00:25:36.679 [2024-07-15 16:17:11.883283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:36.679 16:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:36.679 16:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:25:36.679 16:17:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:36.679 16:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:36.679 16:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.941 16:17:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:36.941 16:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:36.941 16:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.941 16:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.941 [2024-07-15 16:17:12.535410] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:36.941 16:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.942 16:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:36.942 16:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.942 16:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.942 [2024-07-15 16:17:12.547613] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:36.942 16:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.942 16:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:36.942 16:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.942 16:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.942 null0 00:25:36.942 16:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.942 16:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:36.942 16:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.942 16:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.942 null1 00:25:36.942 16:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.942 16:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:36.942 16:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.942 16:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.942 16:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.942 16:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2411048 00:25:36.942 16:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2411048 /tmp/host.sock 00:25:36.942 16:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:36.942 16:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 2411048 ']' 00:25:36.942 16:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:25:36.942 16:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:36.942 16:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:36.942 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:36.942 16:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:36.942 16:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.942 [2024-07-15 16:17:12.642333] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:25:36.942 [2024-07-15 16:17:12.642395] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2411048 ] 00:25:36.942 EAL: No free 2048 kB hugepages reported on node 1 00:25:36.942 [2024-07-15 16:17:12.705893] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.942 [2024-07-15 16:17:12.780487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:37.882 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:37.882 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:25:37.882 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:37.882 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:37.882 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.882 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.882 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.882 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:37.882 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.882 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.882 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.882 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:37.882 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:37.882 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:37.882 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:37.882 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.882 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:37.882 16:17:13 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.882 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:37.882 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.882 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:37.882 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:37.882 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:37.882 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:37.882 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.882 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.882 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:37.882 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:37.882 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.882 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:37.882 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:37.882 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.882 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.882 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.882 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:37.882 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:37.882 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:37.882 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.882 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:37.882 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.882 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:37.882 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.883 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:37.883 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:37.883 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:37.883 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:37.883 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:37.883 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:37.883 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.883 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.883 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.883 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:37.883 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:37.883 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.883 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.883 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.883 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:37.883 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:37.883 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:37.883 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.883 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:37.883 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.883 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:37.883 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.883 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:37.883 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:37.883 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:37.883 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:37.883 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:37.883 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:37.883 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.883 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.144 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.144 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:38.144 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:38.144 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.144 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.144 [2024-07-15 16:17:13.774745] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:38.144 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.144 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:38.144 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:38.144 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:38.144 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.144 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:38.144 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.144 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:38.144 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.144 
16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:38.144 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:38.144 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:38.144 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:38.144 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.144 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.144 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:38.144 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:38.144 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.144 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:38.144 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:38.144 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:38.144 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:38.144 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:38.144 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:38.144 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:38.144 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:38.144 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:38.144 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:38.144 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:38.144 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.144 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.144 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.144 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:38.144 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:38.144 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:38.144 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:38.145 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:38.145 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.145 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.145 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.145 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:38.145 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:38.145 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:38.145 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:38.145 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:38.145 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:38.145 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:38.145 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:38.145 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:38.145 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.145 16:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:38.145 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.145 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.406 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:25:38.406 16:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:25:38.666 [2024-07-15 16:17:14.476351] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:38.666 [2024-07-15 16:17:14.476377] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:38.666 [2024-07-15 16:17:14.476392] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:38.925 [2024-07-15 16:17:14.564673] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:39.186 [2024-07-15 16:17:14.789527] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:25:39.186 [2024-07-15 16:17:14.789547] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:39.186 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:39.186 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:39.186 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:39.186 16:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:39.186 16:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:39.186 16:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:39.186 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.186 16:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:39.186 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.186 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.447 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.447 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:39.448 16:17:15 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 
'expected_count))' 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.448 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.709 [2024-07-15 16:17:15.306626] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:39.709 [2024-07-15 16:17:15.307862] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:39.709 [2024-07-15 16:17:15.307889] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.709 16:17:15 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:39.709 [2024-07-15 16:17:15.438295] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.709 16:17:15 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:39.709 16:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:25:39.969 [2024-07-15 16:17:15.664546] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:39.969 [2024-07-15 16:17:15.664565] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:39.969 [2024-07-15 16:17:15.664571] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:40.912 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:40.912 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:40.912 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:40.912 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:40.912 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:40.912 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.912 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:40.912 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.912 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:40.912 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.912 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:40.912 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:40.912 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:40.912 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:40.912 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:40.912 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:40.912 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:40.912 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:40.912 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:40.912 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:40.912 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:40.912 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:40.912 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.912 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.912 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.912 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:40.912 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:40.912 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:40.912 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:40.912 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:40.912 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.912 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.912 [2024-07-15 16:17:16.590305] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:40.912 [2024-07-15 16:17:16.590328] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:40.912 [2024-07-15 16:17:16.591595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.912 [2024-07-15 16:17:16.591612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.912 [2024-07-15 16:17:16.591621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.912 [2024-07-15 16:17:16.591632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.912 [2024-07-15 16:17:16.591640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.912 [2024-07-15 16:17:16.591647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.912 [2024-07-15 16:17:16.591655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.912 [2024-07-15 16:17:16.591662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.912 [2024-07-15 16:17:16.591669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18209b0 is same with the state(5) to be set 00:25:40.912 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.912 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:40.912 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:40.912 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:40.912 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:40.912 16:17:16 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:40.912 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:40.912 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:40.912 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:40.912 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.912 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:40.912 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.912 [2024-07-15 16:17:16.601608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18209b0 (9): Bad file descriptor 00:25:40.912 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:40.912 [2024-07-15 16:17:16.611648] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:40.912 [2024-07-15 16:17:16.612108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.912 [2024-07-15 16:17:16.612128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18209b0 with addr=10.0.0.2, port=4420 00:25:40.912 [2024-07-15 16:17:16.612137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18209b0 is same with the state(5) to be set 00:25:40.912 [2024-07-15 16:17:16.612148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18209b0 (9): Bad file descriptor 00:25:40.912 [2024-07-15 16:17:16.612159] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:40.912 [2024-07-15 16:17:16.612165] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:40.912 [2024-07-15 16:17:16.612173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:40.912 [2024-07-15 16:17:16.612184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:40.912 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.912 [2024-07-15 16:17:16.621708] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:40.912 [2024-07-15 16:17:16.622149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.912 [2024-07-15 16:17:16.622164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18209b0 with addr=10.0.0.2, port=4420 00:25:40.912 [2024-07-15 16:17:16.622175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18209b0 is same with the state(5) to be set 00:25:40.912 [2024-07-15 16:17:16.622187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18209b0 (9): Bad file descriptor 00:25:40.912 [2024-07-15 16:17:16.622198] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:40.912 [2024-07-15 16:17:16.622205] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:40.912 [2024-07-15 16:17:16.622213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:40.912 [2024-07-15 16:17:16.622223] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:40.912 [2024-07-15 16:17:16.631760] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:40.912 [2024-07-15 16:17:16.632365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.912 [2024-07-15 16:17:16.632402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18209b0 with addr=10.0.0.2, port=4420 00:25:40.912 [2024-07-15 16:17:16.632415] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18209b0 is same with the state(5) to be set 00:25:40.912 [2024-07-15 16:17:16.632435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18209b0 (9): Bad file descriptor 00:25:40.912 [2024-07-15 16:17:16.632462] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:40.912 [2024-07-15 16:17:16.632470] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:40.912 [2024-07-15 16:17:16.632478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:40.912 [2024-07-15 16:17:16.632494] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:40.912 [2024-07-15 16:17:16.641812] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:40.912 [2024-07-15 16:17:16.642104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.912 [2024-07-15 16:17:16.642120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18209b0 with addr=10.0.0.2, port=4420 00:25:40.912 [2024-07-15 16:17:16.642133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18209b0 is same with the state(5) to be set 00:25:40.912 [2024-07-15 16:17:16.642145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18209b0 (9): Bad file descriptor 00:25:40.912 [2024-07-15 16:17:16.642156] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:40.912 [2024-07-15 16:17:16.642163] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:40.912 [2024-07-15 16:17:16.642170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:40.912 [2024-07-15 16:17:16.642181] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:40.913 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.913 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:40.913 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:40.913 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:40.913 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:40.913 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:40.913 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:40.913 [2024-07-15 16:17:16.651868] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:40.913 [2024-07-15 16:17:16.652387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.913 [2024-07-15 16:17:16.652425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18209b0 with addr=10.0.0.2, port=4420 00:25:40.913 [2024-07-15 16:17:16.652438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18209b0 is same with the state(5) to be set 00:25:40.913 [2024-07-15 16:17:16.652461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18209b0 (9): Bad file descriptor 00:25:40.913 [2024-07-15 16:17:16.652474] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:40.913 [2024-07-15 16:17:16.652481] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:40.913 [2024-07-15 16:17:16.652489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:40.913 [2024-07-15 16:17:16.652504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:40.913 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:40.913 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:40.913 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:40.913 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:40.913 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.913 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:40.913 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.913 [2024-07-15 16:17:16.661922] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:40.913 [2024-07-15 16:17:16.662446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.913 [2024-07-15 16:17:16.662484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18209b0 with addr=10.0.0.2, port=4420 00:25:40.913 [2024-07-15 16:17:16.662495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18209b0 is same with the state(5) to be set 00:25:40.913 [2024-07-15 16:17:16.662513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18209b0 (9): Bad file descriptor 00:25:40.913 [2024-07-15 16:17:16.662525] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:40.913 [2024-07-15 16:17:16.662532] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:40.913 [2024-07-15 16:17:16.662541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:40.913 [2024-07-15 16:17:16.662556] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:40.913 [2024-07-15 16:17:16.671982] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:40.913 [2024-07-15 16:17:16.672515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.913 [2024-07-15 16:17:16.672552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18209b0 with addr=10.0.0.2, port=4420 00:25:40.913 [2024-07-15 16:17:16.672563] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18209b0 is same with the state(5) to be set 00:25:40.913 [2024-07-15 16:17:16.672582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18209b0 (9): Bad file descriptor 00:25:40.913 [2024-07-15 16:17:16.672594] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:40.913 [2024-07-15 16:17:16.672601] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:40.913 [2024-07-15 16:17:16.672609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:40.913 [2024-07-15 16:17:16.672631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:40.913 [2024-07-15 16:17:16.678775] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:40.913 [2024-07-15 16:17:16.678794] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:40.913 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.913 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:40.913 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:40.913 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:40.913 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:40.913 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:40.913 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:40.913 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:40.913 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:40.913 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:40.913 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:40.913 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.913 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:40.913 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.913 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:40.913 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.913 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:25:40.913 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:40.913 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:40.913 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:40.913 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:40.913 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:40.913 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:40.913 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:40.913 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:40.913 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:40.913 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:25:40.913 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:40.913 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.913 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.174 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.174 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:41.174 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:41.174 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:41.174 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:41.174 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:41.174 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.174 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.174 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.174 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:41.174 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:41.174 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:41.174 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:41.174 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:41.174 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:41.174 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:41.174 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:41.174 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:41.174 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.174 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:41.174 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.174 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.174 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:25:41.174 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:41.174 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:41.174 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:41.174 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:41.174 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:41.174 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:41.174 
16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:41.174 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:41.174 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:41.174 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.174 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:41.174 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.174 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:41.174 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.174 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:25:41.174 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:41.174 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:41.174 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:41.174 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:41.174 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:41.174 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:41.174 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:41.174 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:41.174 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:41.174 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:41.174 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.174 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.175 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:41.175 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.175 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:41.175 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:41.175 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:41.175 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:41.175 16:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:41.175 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.175 16:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.558 [2024-07-15 16:17:18.025388] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:42.558 [2024-07-15 16:17:18.025404] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:42.558 [2024-07-15 16:17:18.025416] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:42.558 [2024-07-15 16:17:18.111704] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:42.819 [2024-07-15 16:17:18.424336] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:42.819 [2024-07-15 16:17:18.424375] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:42.819 16:17:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.819 16:17:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:42.819 16:17:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:25:42.819 16:17:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:42.819 16:17:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:42.819 16:17:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:42.819 16:17:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:42.819 16:17:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:42.819 16:17:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:42.819 16:17:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.819 16:17:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.819 request: 00:25:42.819 { 00:25:42.819 "name": "nvme", 00:25:42.819 "trtype": "tcp", 00:25:42.819 "traddr": "10.0.0.2", 00:25:42.819 "adrfam": "ipv4", 00:25:42.819 "trsvcid": 
"8009", 00:25:42.819 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:42.819 "wait_for_attach": true, 00:25:42.819 "method": "bdev_nvme_start_discovery", 00:25:42.819 "req_id": 1 00:25:42.819 } 00:25:42.819 Got JSON-RPC error response 00:25:42.819 response: 00:25:42.819 { 00:25:42.819 "code": -17, 00:25:42.819 "message": "File exists" 00:25:42.819 } 00:25:42.819 16:17:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:42.819 16:17:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:25:42.819 16:17:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:42.819 16:17:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:42.819 16:17:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:42.819 16:17:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:42.819 16:17:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:42.819 16:17:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:42.819 16:17:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.819 16:17:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.819 16:17:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:42.819 16:17:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:42.819 16:17:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.819 16:17:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:42.819 16:17:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:42.819 16:17:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:42.819 16:17:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:42.819 16:17:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.819 16:17:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:42.819 16:17:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.819 16:17:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:42.819 16:17:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.819 16:17:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:42.819 16:17:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:42.819 16:17:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:25:42.819 16:17:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:42.819 16:17:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:42.819 16:17:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:42.819 16:17:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # 
type -t rpc_cmd 00:25:42.819 16:17:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:42.819 16:17:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:42.819 16:17:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.819 16:17:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.819 request: 00:25:42.819 { 00:25:42.819 "name": "nvme_second", 00:25:42.819 "trtype": "tcp", 00:25:42.819 "traddr": "10.0.0.2", 00:25:42.819 "adrfam": "ipv4", 00:25:42.819 "trsvcid": "8009", 00:25:42.819 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:42.819 "wait_for_attach": true, 00:25:42.819 "method": "bdev_nvme_start_discovery", 00:25:42.819 "req_id": 1 00:25:42.819 } 00:25:42.819 Got JSON-RPC error response 00:25:42.819 response: 00:25:42.819 { 00:25:42.819 "code": -17, 00:25:42.819 "message": "File exists" 00:25:42.819 } 00:25:42.819 16:17:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:42.819 16:17:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:25:42.819 16:17:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:42.819 16:17:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:42.819 16:17:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:42.820 16:17:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:42.820 16:17:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:42.820 16:17:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.820 16:17:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:42.820 16:17:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.820 16:17:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:42.820 16:17:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:42.820 16:17:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.820 16:17:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:42.820 16:17:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:42.820 16:17:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:42.820 16:17:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:42.820 16:17:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.820 16:17:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:42.820 16:17:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.820 16:17:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:42.820 16:17:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.081 16:17:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:43.081 16:17:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:43.081 16:17:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:25:43.081 16:17:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:43.081 16:17:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:43.081 16:17:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:43.081 16:17:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:43.081 16:17:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:43.081 16:17:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:43.081 16:17:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.081 16:17:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.023 [2024-07-15 16:17:19.688908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.023 [2024-07-15 16:17:19.688936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185eec0 with addr=10.0.0.2, port=8010 00:25:44.023 [2024-07-15 16:17:19.688950] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:44.023 [2024-07-15 16:17:19.688957] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:44.023 [2024-07-15 16:17:19.688964] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:44.964 [2024-07-15 16:17:20.691435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.964 [2024-07-15 16:17:20.691475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185eec0 with addr=10.0.0.2, port=8010 00:25:44.964 [2024-07-15 16:17:20.691490] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:44.964 [2024-07-15 16:17:20.691498] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:44.964 [2024-07-15 16:17:20.691505] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:45.906 [2024-07-15 16:17:21.693238] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:45.906 request: 00:25:45.906 { 00:25:45.906 "name": "nvme_second", 00:25:45.906 "trtype": "tcp", 00:25:45.906 "traddr": "10.0.0.2", 00:25:45.906 "adrfam": "ipv4", 00:25:45.906 "trsvcid": "8010", 00:25:45.906 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:45.906 "wait_for_attach": false, 00:25:45.906 "attach_timeout_ms": 3000, 00:25:45.906 "method": "bdev_nvme_start_discovery", 00:25:45.906 "req_id": 1 00:25:45.906 } 00:25:45.906 Got JSON-RPC error response 00:25:45.906 response: 00:25:45.906 { 00:25:45.906 "code": -110, 00:25:45.906 "message": "Connection timed out" 00:25:45.906 } 00:25:45.906 16:17:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:45.906 16:17:21 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@651 -- # es=1 00:25:45.906 16:17:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:45.906 16:17:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:45.906 16:17:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:45.906 16:17:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:45.906 16:17:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:45.906 16:17:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:45.906 16:17:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.906 16:17:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:45.906 16:17:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.906 16:17:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:45.906 16:17:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.167 16:17:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:46.167 16:17:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:46.167 16:17:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2411048 00:25:46.167 16:17:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:46.167 16:17:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:46.167 16:17:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:25:46.167 16:17:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:46.167 16:17:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:25:46.167 16:17:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:46.167 16:17:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:46.167 rmmod nvme_tcp 00:25:46.167 rmmod nvme_fabrics 00:25:46.167 rmmod nvme_keyring 00:25:46.167 16:17:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:46.167 16:17:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:25:46.167 16:17:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:25:46.167 16:17:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 2410700 ']' 00:25:46.167 16:17:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 2410700 00:25:46.167 16:17:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 2410700 ']' 00:25:46.167 16:17:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 2410700 00:25:46.167 16:17:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:25:46.167 16:17:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:46.167 16:17:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2410700 00:25:46.167 16:17:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:46.167 16:17:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:46.167 16:17:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with 
pid 2410700' 00:25:46.167 killing process with pid 2410700 00:25:46.167 16:17:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 2410700 00:25:46.167 16:17:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 2410700 00:25:46.167 16:17:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:46.167 16:17:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:46.167 16:17:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:46.167 16:17:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:46.167 16:17:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:46.167 16:17:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:46.167 16:17:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:46.167 16:17:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:48.725 00:25:48.725 real 0m19.877s 00:25:48.725 user 0m23.208s 00:25:48.725 sys 0m6.938s 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:48.725 ************************************ 00:25:48.725 END TEST nvmf_host_discovery 00:25:48.725 ************************************ 00:25:48.725 16:17:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:48.725 16:17:24 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:48.725 16:17:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:48.725 16:17:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:48.725 16:17:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:48.725 ************************************ 00:25:48.725 START TEST nvmf_host_multipath_status 00:25:48.725 ************************************ 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:48.725 * Looking for test storage... 
00:25:48.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:48.725 16:17:24 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:25:48.725 16:17:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:55.384 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:55.384 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
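The trace above walks the cached PCI device list, keeps the Intel E810 functions (vendor 0x8086, device 0x159b), and resolves each one to its kernel net device through sysfs, which is where the "Found net devices under 0000:4b:00.x: cvl_0_x" lines come from. A minimal standalone sketch of the same idea, assuming lspci is available (the test itself walks its own pci_bus_cache rather than calling lspci):

# Illustrative sketch only -- not the nvmf/common.sh implementation.
# List Intel E810 functions (8086:159b) and print the net device(s) each
# one exposes under /sys, mirroring the "Found net devices under ..." lines.
for bdf in $(lspci -D -d 8086:159b | awk '{print $1}'); do
    for netdir in "/sys/bus/pci/devices/$bdf/net/"*; do
        [ -e "$netdir" ] || continue
        echo "Found net device under $bdf: ${netdir##*/}"
    done
done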
00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:55.384 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:55.384 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:55.385 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:55.385 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:55.385 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:55.385 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:55.385 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:25:55.385 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:55.385 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:55.385 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:55.385 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:55.385 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:55.385 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:55.385 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:55.385 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:55.385 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:55.385 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:55.385 16:17:31 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:55.385 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:55.385 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:55.385 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:55.385 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:55.385 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:55.385 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:55.385 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:55.385 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:55.650 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:55.650 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:55.650 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:55.650 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:55.650 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:55.650 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:25:55.650 00:25:55.650 --- 10.0.0.2 ping statistics --- 00:25:55.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.650 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:25:55.650 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:55.650 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:55.650 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:25:55.650 00:25:55.650 --- 10.0.0.1 ping statistics --- 00:25:55.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.650 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:25:55.650 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:55.650 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:25:55.650 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:55.650 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:55.650 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:55.650 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:55.650 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:55.650 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:55.650 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:55.650 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:55.650 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:55.650 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:55.650 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:55.650 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2416932 00:25:55.650 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2416932 00:25:55.650 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:55.650 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2416932 ']' 00:25:55.650 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:55.650 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:55.650 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:55.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:55.650 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:55.650 16:17:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:55.650 [2024-07-15 16:17:31.478667] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
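Before the target application comes up, the nvmf_tcp_init block traced above builds a two-port loopback test bed: one E810 port (cvl_0_0) is moved into a network namespace to act as the target, the other (cvl_0_1) stays in the root namespace as the initiator, and both directions are verified with ping. Condensed into one place, and using the interface names and addresses this particular run happened to choose (address flushes omitted), the setup amounts to roughly this sketch:

# Condensed, illustrative recap of the nvmf_tcp_init steps traced above.
NS=cvl_0_0_ns_spdk        # target-side network namespace
TARGET_IF=cvl_0_0         # moved into the namespace, gets 10.0.0.2
INITIATOR_IF=cvl_0_1      # stays in the root namespace, gets 10.0.0.1

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                      # root namespace -> target
ip netns exec "$NS" ping -c 1 10.0.0.1  # target namespace -> initiator

With this in place, nvmf_tgt is launched inside the namespace (the ip netns exec ... nvmf_tgt line below), while the bdevperf initiator later connects from the root namespace to 10.0.0.2 on ports 4420 and 4421.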
00:25:55.650 [2024-07-15 16:17:31.478736] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:55.910 EAL: No free 2048 kB hugepages reported on node 1 00:25:55.910 [2024-07-15 16:17:31.549631] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:55.910 [2024-07-15 16:17:31.624108] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:55.910 [2024-07-15 16:17:31.624152] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:55.910 [2024-07-15 16:17:31.624160] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:55.910 [2024-07-15 16:17:31.624167] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:55.910 [2024-07-15 16:17:31.624173] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:55.910 [2024-07-15 16:17:31.624248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:55.910 [2024-07-15 16:17:31.624249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:56.479 16:17:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:56.479 16:17:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:25:56.479 16:17:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:56.480 16:17:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:56.480 16:17:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:56.480 16:17:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:56.480 16:17:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2416932 00:25:56.480 16:17:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:56.740 [2024-07-15 16:17:32.420158] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:56.740 16:17:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:57.000 Malloc0 00:25:57.000 16:17:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:57.000 16:17:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:57.261 16:17:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:57.261 [2024-07-15 16:17:33.045846] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:57.261 16:17:33 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:57.521 [2024-07-15 16:17:33.186148] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:57.521 16:17:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2417308 00:25:57.521 16:17:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:57.521 16:17:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:57.521 16:17:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2417308 /var/tmp/bdevperf.sock 00:25:57.521 16:17:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2417308 ']' 00:25:57.521 16:17:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:57.521 16:17:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:57.521 16:17:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:57.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:57.521 16:17:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:57.521 16:17:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:58.463 16:17:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:58.463 16:17:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:25:58.463 16:17:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:58.463 16:17:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:25:58.723 Nvme0n1 00:25:58.723 16:17:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:58.985 Nvme0n1 00:25:58.985 16:17:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:58.985 16:17:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:01.532 16:17:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:01.532 16:17:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:01.532 16:17:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:01.532 16:17:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:02.474 16:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:02.474 16:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:02.474 16:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.474 16:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:02.735 16:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.735 16:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:02.735 16:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.735 16:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:02.735 16:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:02.735 16:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:02.735 16:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.735 16:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:02.996 16:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.996 16:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:02.996 16:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.996 16:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:02.996 16:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.996 16:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:02.996 16:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.996 16:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:03.257 16:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.257 16:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:03.257 16:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.257 16:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:03.517 16:17:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.517 16:17:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:03.517 16:17:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:03.517 16:17:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:03.777 16:17:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:04.718 16:17:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:04.718 16:17:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:04.718 16:17:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.718 16:17:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:04.978 16:17:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:04.979 16:17:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:04.979 16:17:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.979 16:17:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:05.239 16:17:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.239 16:17:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:05.239 16:17:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.240 16:17:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:05.240 16:17:41 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.240 16:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:05.240 16:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.240 16:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:05.500 16:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.500 16:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:05.500 16:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.500 16:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:05.760 16:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.760 16:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:05.760 16:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:05.760 16:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.760 16:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.760 16:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:05.760 16:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:06.020 16:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:06.282 16:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:07.223 16:17:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:07.223 16:17:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:07.223 16:17:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.223 16:17:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:07.223 16:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.223 16:17:43 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:07.223 16:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.223 16:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:07.483 16:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:07.483 16:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:07.483 16:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.483 16:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:07.743 16:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.743 16:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:07.743 16:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.743 16:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:07.743 16:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.743 16:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:07.743 16:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.743 16:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:08.003 16:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.003 16:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:08.003 16:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.003 16:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:08.263 16:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.263 16:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:08.263 16:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:08.263 16:17:44 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:08.523 16:17:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:09.461 16:17:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:09.461 16:17:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:09.461 16:17:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.461 16:17:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:09.721 16:17:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.721 16:17:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:09.721 16:17:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.721 16:17:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:09.721 16:17:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:09.721 16:17:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:09.722 16:17:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.722 16:17:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:09.982 16:17:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.982 16:17:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:09.982 16:17:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.982 16:17:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:10.243 16:17:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.243 16:17:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:10.243 16:17:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.243 16:17:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:10.243 16:17:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:26:10.243 16:17:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:10.243 16:17:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.243 16:17:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:10.504 16:17:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:10.504 16:17:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:10.504 16:17:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:10.765 16:17:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:10.765 16:17:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:12.148 16:17:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:12.148 16:17:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:12.148 16:17:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.148 16:17:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:12.148 16:17:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:12.148 16:17:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:12.148 16:17:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.148 16:17:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:12.148 16:17:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:12.148 16:17:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:12.148 16:17:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.148 16:17:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:12.466 16:17:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:12.466 16:17:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:26:12.466 16:17:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.466 16:17:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:12.466 16:17:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:12.466 16:17:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:12.466 16:17:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.466 16:17:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:12.737 16:17:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:12.737 16:17:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:12.737 16:17:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.737 16:17:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:12.737 16:17:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:12.737 16:17:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:12.737 16:17:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:12.996 16:17:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:13.257 16:17:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:14.197 16:17:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:14.197 16:17:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:14.197 16:17:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.197 16:17:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:14.471 16:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:14.471 16:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:14.471 16:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.471 16:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:14.471 16:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.471 16:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:14.471 16:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.471 16:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:14.730 16:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.730 16:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:14.730 16:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.730 16:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:14.989 16:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.989 16:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:14.989 16:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.989 16:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:14.989 16:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:14.989 16:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:14.989 16:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.989 16:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:15.249 16:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.249 16:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:15.509 16:17:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:26:15.509 16:17:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:26:15.509 16:17:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:15.769 16:17:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:16.710 16:17:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:16.710 16:17:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:16.710 16:17:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.710 16:17:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:16.971 16:17:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.971 16:17:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:16.971 16:17:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.971 16:17:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:16.971 16:17:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.971 16:17:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:16.971 16:17:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.971 16:17:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:17.231 16:17:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.231 16:17:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:17.231 16:17:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.231 16:17:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:17.493 16:17:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.493 16:17:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:17.493 16:17:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.493 16:17:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:17.493 16:17:53 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.493 16:17:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:17.493 16:17:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.493 16:17:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:17.755 16:17:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.755 16:17:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:17.755 16:17:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:18.015 16:17:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:18.015 16:17:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:19.401 16:17:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:19.401 16:17:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:19.401 16:17:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.401 16:17:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:19.401 16:17:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:19.401 16:17:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:19.401 16:17:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.401 16:17:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:19.401 16:17:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.401 16:17:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:19.401 16:17:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.401 16:17:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:19.662 16:17:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.662 16:17:55 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:19.662 16:17:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.662 16:17:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:19.662 16:17:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.662 16:17:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:19.662 16:17:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:19.662 16:17:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.922 16:17:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.922 16:17:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:19.922 16:17:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.922 16:17:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:20.182 16:17:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.182 16:17:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:20.182 16:17:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:20.182 16:17:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:20.442 16:17:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:26:21.384 16:17:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:21.384 16:17:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:21.384 16:17:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.384 16:17:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:21.644 16:17:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.644 16:17:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:21.644 16:17:57 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.644 16:17:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:21.905 16:17:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.905 16:17:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:21.905 16:17:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.905 16:17:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:21.905 16:17:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.905 16:17:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:21.905 16:17:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.905 16:17:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:22.166 16:17:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.166 16:17:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:22.166 16:17:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.166 16:17:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:22.426 16:17:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.426 16:17:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:22.426 16:17:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.426 16:17:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:22.426 16:17:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.426 16:17:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:22.426 16:17:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:22.686 16:17:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:22.946 16:17:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:23.889 16:17:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:23.889 16:17:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:23.889 16:17:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.889 16:17:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:23.889 16:17:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.889 16:17:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:24.150 16:17:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.150 16:17:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:24.150 16:17:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:24.150 16:17:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:24.150 16:17:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.150 16:17:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:24.411 16:18:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:24.411 16:18:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:24.411 16:18:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.411 16:18:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:24.672 16:18:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:24.672 16:18:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:24.672 16:18:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.672 16:18:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:24.672 16:18:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:24.672 16:18:00 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:26:24.672 16:18:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:24.672 16:18:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:26:24.933 16:18:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:24.933 16:18:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2417308
00:26:24.933 16:18:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2417308 ']'
00:26:24.933 16:18:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2417308
00:26:24.933 16:18:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname
00:26:24.933 16:18:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:24.933 16:18:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2417308
00:26:24.933 16:18:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:26:24.933 16:18:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:26:24.933 16:18:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2417308'
00:26:24.933 killing process with pid 2417308
00:26:24.933 16:18:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2417308
00:26:24.933 16:18:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2417308
00:26:24.933 Connection closed with partial response:
00:26:24.933
00:26:24.933
00:26:25.217 16:18:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2417308
00:26:25.217 16:18:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:26:25.217 [2024-07-15 16:17:33.246409] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization...
00:26:25.217 [2024-07-15 16:17:33.246469] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2417308 ]
00:26:25.217 EAL: No free 2048 kB hugepages reported on node 1
00:26:25.217 [2024-07-15 16:17:33.297552] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:25.217 [2024-07-15 16:17:33.350722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:26:25.217 Running I/O for 90 seconds...
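
The trace above comes down to two small helpers from multipath_status.sh: port_status, which asks the bdevperf app over its RPC socket for bdev_nvme_get_io_paths and filters one attribute of one listener with jq, and set_ANA_state, which flips the ANA state of the two listeners via nvmf_subsystem_listener_set_ana_state and then sleeps a second so the host side can react. A minimal bash sketch of that pattern, reconstructed only from the commands visible in the trace (the real helper bodies in multipath_status.sh may differ):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Ask bdevperf for its I/O paths and compare one attribute of the listener on $1.
  port_status() {
      local port=$1 attr=$2 expected=$3
      local actual
      actual=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
               jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
      [[ "$actual" == "$expected" ]]
  }

  # Flip the ANA state of both listeners on the target, then let the host catch up.
  set_ANA_state() {
      $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
      sleep 1
  }

  # e.g. the last step traced above: 4420 stays reachable, 4421 goes dark.
  set_ANA_state non_optimized inaccessible
  port_status 4420 current true && port_status 4421 accessible false && echo "paths look right"

check_status in the trace is simply six such port_status calls in a row (current, connected, accessible for each of 4420 and 4421), which is why every check_status line is followed by six rpc.py/jq pairs.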
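
Everything printed after the cat of try.txt below is bdevperf's own qpair trace: while a listener is marked inaccessible, each queued READ/WRITE gets a print_command/print_completion pair ending in ASYMMETRIC ACCESS INACCESSIBLE (03/02), which is what you would expect while a path is being failed over rather than a sign the test broke. A dump this size is easier to summarize than to read linearly; an illustrative way to do that (not part of the test) is:

  grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' try.txt                          # total completions carrying the ANA-inaccessible status
  grep -o 'NOTICE\*: \(READ\|WRITE\) sqid:[0-9]*' try.txt | sort | uniq -c  # READ vs WRITE breakdown per submission queue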
00:26:25.217 [2024-07-15 16:17:46.361182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.217 [2024-07-15 16:17:46.361215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:25.217 [2024-07-15 16:17:46.361233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.217 [2024-07-15 16:17:46.361239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:25.217 [2024-07-15 16:17:46.361249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:65248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.217 [2024-07-15 16:17:46.361255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:25.217 [2024-07-15 16:17:46.361265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:65256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.217 [2024-07-15 16:17:46.361270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.217 [2024-07-15 16:17:46.361280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.217 [2024-07-15 16:17:46.361285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.217 [2024-07-15 16:17:46.361295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.217 [2024-07-15 16:17:46.361300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:25.217 [2024-07-15 16:17:46.361310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.217 [2024-07-15 16:17:46.361315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:25.217 [2024-07-15 16:17:46.361325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:65288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.217 [2024-07-15 16:17:46.361330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:25.217 [2024-07-15 16:17:46.361568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.217 [2024-07-15 16:17:46.361574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:25.217 [2024-07-15 16:17:46.361584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.217 [2024-07-15 16:17:46.361590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:10 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:25.217 [2024-07-15 16:17:46.361600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:65312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.217 [2024-07-15 16:17:46.361610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:25.217 [2024-07-15 16:17:46.361620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.217 [2024-07-15 16:17:46.361625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:25.217 [2024-07-15 16:17:46.361636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.217 [2024-07-15 16:17:46.361641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:25.217 [2024-07-15 16:17:46.361652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.217 [2024-07-15 16:17:46.361657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:25.217 [2024-07-15 16:17:46.361668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:65344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.217 [2024-07-15 16:17:46.361673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:25.217 [2024-07-15 16:17:46.361683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:65352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.217 [2024-07-15 16:17:46.361688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:25.217 [2024-07-15 16:17:46.361698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:65360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.217 [2024-07-15 16:17:46.361703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:25.217 [2024-07-15 16:17:46.362182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.217 [2024-07-15 16:17:46.362193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:25.217 [2024-07-15 16:17:46.362205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.217 [2024-07-15 16:17:46.362210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:25.217 [2024-07-15 16:17:46.362220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:65384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.217 [2024-07-15 16:17:46.362225] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:25.217 [2024-07-15 16:17:46.362236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:65392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.217 [2024-07-15 16:17:46.362240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:25.217 [2024-07-15 16:17:46.362251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.217 [2024-07-15 16:17:46.362256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:25.217 [2024-07-15 16:17:46.362266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:65408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.217 [2024-07-15 16:17:46.362271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:25.217 [2024-07-15 16:17:46.362283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.217 [2024-07-15 16:17:46.362288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:25.217 [2024-07-15 16:17:46.362298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:65424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.217 [2024-07-15 16:17:46.362303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:25.218 [2024-07-15 16:17:46.362314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.218 [2024-07-15 16:17:46.362319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:25.218 [2024-07-15 16:17:46.362329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.218 [2024-07-15 16:17:46.362334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:25.218 [2024-07-15 16:17:46.362344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.218 [2024-07-15 16:17:46.362349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:25.218 [2024-07-15 16:17:46.362359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.218 [2024-07-15 16:17:46.362364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:25.218 [2024-07-15 16:17:46.362375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:25.218 [2024-07-15 16:17:46.362381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:25.218 [2024-07-15 16:17:46.362391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.218 [2024-07-15 16:17:46.362396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:25.218 [2024-07-15 16:17:46.362406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.218 [2024-07-15 16:17:46.362411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:25.218 [2024-07-15 16:17:46.362421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:65488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.218 [2024-07-15 16:17:46.362426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:25.218 [2024-07-15 16:17:46.362437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:65496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.218 [2024-07-15 16:17:46.362441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:25.218 [2024-07-15 16:17:46.362452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.218 [2024-07-15 16:17:46.362457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:25.218 [2024-07-15 16:17:46.362469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.218 [2024-07-15 16:17:46.362474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.218 [2024-07-15 16:17:46.362484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:65520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.218 [2024-07-15 16:17:46.362488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.218 [2024-07-15 16:17:46.362498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:65528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.218 [2024-07-15 16:17:46.362503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:25.218 [2024-07-15 16:17:46.362514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:65536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.218 [2024-07-15 16:17:46.362520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:25.218 [2024-07-15 16:17:46.362530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 
lba:65544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.218 [2024-07-15 16:17:46.362535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:25.218 [2024-07-15 16:17:46.362545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:65552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.218 [2024-07-15 16:17:46.362551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:25.218 [2024-07-15 16:17:46.362561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:65560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.218 [2024-07-15 16:17:46.362565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:25.218 [2024-07-15 16:17:46.362576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:65568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.218 [2024-07-15 16:17:46.362582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:25.218 [2024-07-15 16:17:46.362592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:65576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.218 [2024-07-15 16:17:46.362597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:25.218 [2024-07-15 16:17:46.362607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:65584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.218 [2024-07-15 16:17:46.362611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:25.218 [2024-07-15 16:17:46.362621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:65592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.218 [2024-07-15 16:17:46.362627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:25.218 [2024-07-15 16:17:46.362637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:65600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.218 [2024-07-15 16:17:46.362642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:25.218 [2024-07-15 16:17:46.362652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:65608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.218 [2024-07-15 16:17:46.362657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:25.218 [2024-07-15 16:17:46.362668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:65616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.218 [2024-07-15 16:17:46.362673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:25.218 [2024-07-15 16:17:46.362683] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:65624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.218 [2024-07-15 16:17:46.362688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:25.218 [2024-07-15 16:17:46.362698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:65632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.218 [2024-07-15 16:17:46.362703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:25.218 [2024-07-15 16:17:46.362713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:65640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.218 [2024-07-15 16:17:46.362718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:25.218 [2024-07-15 16:17:46.362728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:65648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.218 [2024-07-15 16:17:46.362733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:25.218 [2024-07-15 16:17:46.362744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:65656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.218 [2024-07-15 16:17:46.362748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:25.218 [2024-07-15 16:17:46.362758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:65664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.218 [2024-07-15 16:17:46.362763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:25.218 [2024-07-15 16:17:46.362774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:65672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.218 [2024-07-15 16:17:46.362779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:25.218 [2024-07-15 16:17:46.362789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.218 [2024-07-15 16:17:46.362794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:25.218 [2024-07-15 16:17:46.362804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:65688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.218 [2024-07-15 16:17:46.362809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:25.218 [2024-07-15 16:17:46.362819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:65696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.218 [2024-07-15 16:17:46.362823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 
00:26:25.218 [2024-07-15 16:17:46.362834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:65704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.218 [2024-07-15 16:17:46.362840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:25.218 [2024-07-15 16:17:46.362850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:65184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.218 [2024-07-15 16:17:46.362856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:25.218 [2024-07-15 16:17:46.362865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:65192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.218 [2024-07-15 16:17:46.362870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:25.218 [2024-07-15 16:17:46.362881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:65200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.218 [2024-07-15 16:17:46.362886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:25.218 [2024-07-15 16:17:46.362896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:65208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.218 [2024-07-15 16:17:46.362900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:25.218 [2024-07-15 16:17:46.362910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:65216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.218 [2024-07-15 16:17:46.362915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:25.218 [2024-07-15 16:17:46.362925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:65224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.218 [2024-07-15 16:17:46.362930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:25.218 [2024-07-15 16:17:46.363296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:65232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.219 [2024-07-15 16:17:46.363304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.219 [2024-07-15 16:17:46.363315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:65712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.219 [2024-07-15 16:17:46.363320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.219 [2024-07-15 16:17:46.363331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:65720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.219 [2024-07-15 16:17:46.363335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:64 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.219 [2024-07-15 16:17:46.363347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:65728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.219 [2024-07-15 16:17:46.363352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:25.219 [2024-07-15 16:17:46.363363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:65736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.219 [2024-07-15 16:17:46.363368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:25.219 [2024-07-15 16:17:46.363378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:65744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.219 [2024-07-15 16:17:46.363384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:25.219 [2024-07-15 16:17:46.363395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:65752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.219 [2024-07-15 16:17:46.363401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:25.219 [2024-07-15 16:17:46.363411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:65760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.219 [2024-07-15 16:17:46.363416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:25.219 [2024-07-15 16:17:46.363426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.219 [2024-07-15 16:17:46.363431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:25.219 [2024-07-15 16:17:46.363441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:65776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.219 [2024-07-15 16:17:46.363447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:25.219 [2024-07-15 16:17:46.363457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.219 [2024-07-15 16:17:46.363462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:25.219 [2024-07-15 16:17:46.363472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:65792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.219 [2024-07-15 16:17:46.363477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:25.219 [2024-07-15 16:17:46.363487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:65800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.219 [2024-07-15 16:17:46.363492] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:25.219 [2024-07-15 16:17:46.363502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:65808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.219 [2024-07-15 16:17:46.363508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:25.219 [2024-07-15 16:17:46.363518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:65816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.219 [2024-07-15 16:17:46.363523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:25.219 [2024-07-15 16:17:46.363533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:65824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.219 [2024-07-15 16:17:46.363538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:25.219 [2024-07-15 16:17:46.363548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:65832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.219 [2024-07-15 16:17:46.363553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:25.219 [2024-07-15 16:17:46.363563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.219 [2024-07-15 16:17:46.363568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:25.219 [2024-07-15 16:17:46.363580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:65848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.219 [2024-07-15 16:17:46.363585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:25.219 [2024-07-15 16:17:46.363595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:65856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.219 [2024-07-15 16:17:46.363601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:25.219 [2024-07-15 16:17:46.363611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:65864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.219 [2024-07-15 16:17:46.363616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:25.219 [2024-07-15 16:17:46.363626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:65872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.219 [2024-07-15 16:17:46.363631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:25.219 [2024-07-15 16:17:46.363641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:25.219 [2024-07-15 16:17:46.363646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:25.219 [2024-07-15 16:17:46.363657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:65888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.219 [2024-07-15 16:17:46.363662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:25.219 [2024-07-15 16:17:46.363843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:65896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.219 [2024-07-15 16:17:46.363850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:25.219 [2024-07-15 16:17:46.363861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:65904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.219 [2024-07-15 16:17:46.363866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:25.219 [2024-07-15 16:17:46.363876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.219 [2024-07-15 16:17:46.363881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:25.219 [2024-07-15 16:17:46.363891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.219 [2024-07-15 16:17:46.363896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:25.219 [2024-07-15 16:17:46.363907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:65928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.219 [2024-07-15 16:17:46.363912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:25.219 [2024-07-15 16:17:46.363922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:65936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.219 [2024-07-15 16:17:46.363927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:25.219 [2024-07-15 16:17:46.363937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:65944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.219 [2024-07-15 16:17:46.363944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:25.219 [2024-07-15 16:17:46.363954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:65952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.219 [2024-07-15 16:17:46.363959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:25.219 [2024-07-15 16:17:46.364041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 
lba:65960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.219 [2024-07-15 16:17:46.364049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:25.219 [2024-07-15 16:17:46.364059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:65968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.219 [2024-07-15 16:17:46.364065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.219 [2024-07-15 16:17:46.364075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.219 [2024-07-15 16:17:46.364080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.219 [2024-07-15 16:17:46.364090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.219 [2024-07-15 16:17:46.364095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:25.219 [2024-07-15 16:17:46.364105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:65992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.219 [2024-07-15 16:17:46.364111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:25.219 [2024-07-15 16:17:46.364125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:66000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.219 [2024-07-15 16:17:46.364131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:25.219 [2024-07-15 16:17:46.364141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:66008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.219 [2024-07-15 16:17:46.364146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:25.219 [2024-07-15 16:17:46.364157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.219 [2024-07-15 16:17:46.364162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:25.219 [2024-07-15 16:17:46.364236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:66024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.219 [2024-07-15 16:17:46.364242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:25.219 [2024-07-15 16:17:46.364253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:66032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.219 [2024-07-15 16:17:46.364258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:25.220 [2024-07-15 16:17:46.364268] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:66040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.220 [2024-07-15 16:17:46.364275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:25.220 [2024-07-15 16:17:46.364285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:66048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.220 [2024-07-15 16:17:46.364291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:25.220 [2024-07-15 16:17:46.364301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:66056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.220 [2024-07-15 16:17:46.364306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:25.220 [2024-07-15 16:17:46.364316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:66064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.220 [2024-07-15 16:17:46.364321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:25.220 [2024-07-15 16:17:46.364331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.220 [2024-07-15 16:17:46.364337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:25.220 [2024-07-15 16:17:46.364347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:66080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.220 [2024-07-15 16:17:46.364352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:25.220 [2024-07-15 16:17:46.364524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:66088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.220 [2024-07-15 16:17:46.364530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:25.220 [2024-07-15 16:17:46.364541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:66096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.220 [2024-07-15 16:17:46.364546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:25.220 [2024-07-15 16:17:46.364556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:66104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.220 [2024-07-15 16:17:46.364562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:25.220 [2024-07-15 16:17:46.364572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:66112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.220 [2024-07-15 16:17:46.364577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 
00:26:25.220 [2024-07-15 16:17:46.364587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:66120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.220 [2024-07-15 16:17:46.364592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:25.220 [2024-07-15 16:17:46.364602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:66128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.220 [2024-07-15 16:17:46.364607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:25.220 [2024-07-15 16:17:46.364617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:66136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.220 [2024-07-15 16:17:46.364622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:25.220 [2024-07-15 16:17:46.364633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:66144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.220 [2024-07-15 16:17:46.364639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:25.220 [2024-07-15 16:17:46.364736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:66152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.220 [2024-07-15 16:17:46.364743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:25.220 [2024-07-15 16:17:46.364754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:66160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.220 [2024-07-15 16:17:46.364759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:25.220 [2024-07-15 16:17:46.364769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:66168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.220 [2024-07-15 16:17:46.364774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:25.220 [2024-07-15 16:17:46.364787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:66176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.220 [2024-07-15 16:17:46.364792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:25.220 [2024-07-15 16:17:46.364802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:66184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.220 [2024-07-15 16:17:46.364807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:25.220 [2024-07-15 16:17:46.364817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:66192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.220 [2024-07-15 16:17:46.364822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:30 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:25.220 [2024-07-15 16:17:46.364832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:65176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.220 [2024-07-15 16:17:46.364837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:25.220 [2024-07-15 16:17:46.364847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.220 [2024-07-15 16:17:46.364852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:25.220 [2024-07-15 16:17:46.364862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:65248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.220 [2024-07-15 16:17:46.364867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:25.220 [2024-07-15 16:17:46.365090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.220 [2024-07-15 16:17:46.365097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.220 [2024-07-15 16:17:46.365108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:65264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.220 [2024-07-15 16:17:46.365113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.220 [2024-07-15 16:17:46.365128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.220 [2024-07-15 16:17:46.365133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:25.220 [2024-07-15 16:17:46.365144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.220 [2024-07-15 16:17:46.365149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:25.220 [2024-07-15 16:17:46.365159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:65288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.220 [2024-07-15 16:17:46.365164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:25.220 [2024-07-15 16:17:46.365174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.220 [2024-07-15 16:17:46.365179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:25.220 [2024-07-15 16:17:46.365189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.220 [2024-07-15 16:17:46.365194] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:25.220 [2024-07-15 16:17:46.365204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.220 [2024-07-15 16:17:46.365210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:25.220 [2024-07-15 16:17:46.365302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.220 [2024-07-15 16:17:46.365309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:25.220 [2024-07-15 16:17:46.365320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.220 [2024-07-15 16:17:46.365325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:25.220 [2024-07-15 16:17:46.365335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.220 [2024-07-15 16:17:46.365339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:25.220 [2024-07-15 16:17:46.365349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:65344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.220 [2024-07-15 16:17:46.365354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:25.220 [2024-07-15 16:17:46.365365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:65352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.220 [2024-07-15 16:17:46.365370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:25.220 [2024-07-15 16:17:46.365380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:65360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.220 [2024-07-15 16:17:46.365384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:25.220 [2024-07-15 16:17:46.365395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.220 [2024-07-15 16:17:46.365404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:25.220 [2024-07-15 16:17:46.365414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.220 [2024-07-15 16:17:46.365419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:25.220 [2024-07-15 16:17:46.365700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:65384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:25.220 [2024-07-15 16:17:46.365707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:25.220 [2024-07-15 16:17:46.365717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:65392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.220 [2024-07-15 16:17:46.365722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:25.220 [2024-07-15 16:17:46.365732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.221 [2024-07-15 16:17:46.365737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:25.221 [2024-07-15 16:17:46.365747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:65408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.221 [2024-07-15 16:17:46.365752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:25.221 [2024-07-15 16:17:46.365762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.221 [2024-07-15 16:17:46.365767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:25.221 [2024-07-15 16:17:46.365777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:65424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.221 [2024-07-15 16:17:46.365782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:25.221 [2024-07-15 16:17:46.365792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.221 [2024-07-15 16:17:46.365797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:25.221 [2024-07-15 16:17:46.365807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.221 [2024-07-15 16:17:46.365812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:25.221 [2024-07-15 16:17:46.365918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.221 [2024-07-15 16:17:46.365925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:25.221 [2024-07-15 16:17:46.365935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.221 [2024-07-15 16:17:46.365941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:25.221 [2024-07-15 16:17:46.365951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 
lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.221 [2024-07-15 16:17:46.365957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:25.221 [2024-07-15 16:17:46.365967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.221 [2024-07-15 16:17:46.365972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:25.221 [2024-07-15 16:17:46.365982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.221 [2024-07-15 16:17:46.365987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:25.221 [2024-07-15 16:17:46.365997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:65488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.221 [2024-07-15 16:17:46.366002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:25.221 [2024-07-15 16:17:46.366012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:65496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.221 [2024-07-15 16:17:46.366017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:25.221 [2024-07-15 16:17:46.366027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.221 [2024-07-15 16:17:46.366032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:25.221 [2024-07-15 16:17:46.366343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:65512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.221 [2024-07-15 16:17:46.366351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.221 [2024-07-15 16:17:46.366361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:65520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.221 [2024-07-15 16:17:46.366366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.221 [2024-07-15 16:17:46.366376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:65528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.221 [2024-07-15 16:17:46.366381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:25.221 [2024-07-15 16:17:46.366391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:65536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.221 [2024-07-15 16:17:46.366396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:25.221 [2024-07-15 16:17:46.366407] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:65544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.221 [2024-07-15 16:17:46.366412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:25.221 [2024-07-15 16:17:46.366422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:65552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.221 [2024-07-15 16:17:46.366427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:25.221 [2024-07-15 16:17:46.366437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:65560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.221 [2024-07-15 16:17:46.366442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:25.221 [2024-07-15 16:17:46.366454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:65568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.221 [2024-07-15 16:17:46.366459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:25.221 [2024-07-15 16:17:46.366557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:65576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.221 [2024-07-15 16:17:46.366564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:25.221 [2024-07-15 16:17:46.366574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:65584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.221 [2024-07-15 16:17:46.366579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:25.221 [2024-07-15 16:17:46.366589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:65592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.221 [2024-07-15 16:17:46.366594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:25.221 [2024-07-15 16:17:46.366605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.221 [2024-07-15 16:17:46.366609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:25.221 [2024-07-15 16:17:46.366619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.221 [2024-07-15 16:17:46.366624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:25.221 [2024-07-15 16:17:46.366634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:65616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.221 [2024-07-15 16:17:46.366639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006e p:0 m:0 dnr:0 
00:26:25.221 [2024-07-15 16:17:46.366649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:65624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.221 [2024-07-15 16:17:46.366654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:25.221 [2024-07-15 16:17:46.366664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:65632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.221 [2024-07-15 16:17:46.366669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:25.221 [2024-07-15 16:17:46.367050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:65640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.221 [2024-07-15 16:17:46.367057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:25.221 [2024-07-15 16:17:46.367068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:65648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.221 [2024-07-15 16:17:46.367072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:25.221 [2024-07-15 16:17:46.367082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:65656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.221 [2024-07-15 16:17:46.367087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:25.221 [2024-07-15 16:17:46.367099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:65664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.222 [2024-07-15 16:17:46.367104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:25.222 [2024-07-15 16:17:46.367114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:65672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.222 [2024-07-15 16:17:46.367119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:25.222 [2024-07-15 16:17:46.367132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:65680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.222 [2024-07-15 16:17:46.367141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:25.222 [2024-07-15 16:17:46.367151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:65688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.222 [2024-07-15 16:17:46.367156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:25.222 [2024-07-15 16:17:46.367166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:65696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.222 [2024-07-15 16:17:46.367171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:78 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:25.222 [2024-07-15 16:17:46.367475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:65704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.222 [2024-07-15 16:17:46.367482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:25.222 [2024-07-15 16:17:46.367492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:65184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.222 [2024-07-15 16:17:46.367497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:25.222 [2024-07-15 16:17:46.367507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:65192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.222 [2024-07-15 16:17:46.367512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:25.222 [2024-07-15 16:17:46.367522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:65200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.222 [2024-07-15 16:17:46.367527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:25.222 [2024-07-15 16:17:46.367537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:65208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.222 [2024-07-15 16:17:46.367542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:25.222 [2024-07-15 16:17:46.367552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:65216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.222 [2024-07-15 16:17:46.367557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:25.222 [2024-07-15 16:17:46.367567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:65224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.222 [2024-07-15 16:17:46.367572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:25.222 [2024-07-15 16:17:46.377743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:65232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.222 [2024-07-15 16:17:46.377767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.222 [2024-07-15 16:17:46.377779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:65712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.222 [2024-07-15 16:17:46.377784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.222 [2024-07-15 16:17:46.377794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:65720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.222 [2024-07-15 16:17:46.377799] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.222 [2024-07-15 16:17:46.377809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:65728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.222 [2024-07-15 16:17:46.377814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:25.222 [2024-07-15 16:17:46.377824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:65736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.222 [2024-07-15 16:17:46.377830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:25.222 [2024-07-15 16:17:46.377840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.222 [2024-07-15 16:17:46.377845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:25.222 [2024-07-15 16:17:46.377855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:65752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.222 [2024-07-15 16:17:46.377860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:25.222 [2024-07-15 16:17:46.377870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:65760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.222 [2024-07-15 16:17:46.377876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:25.222 [2024-07-15 16:17:46.378049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:65768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.222 [2024-07-15 16:17:46.378057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:25.222 [2024-07-15 16:17:46.378069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:65776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.222 [2024-07-15 16:17:46.378075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:25.222 [2024-07-15 16:17:46.378085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:65784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.222 [2024-07-15 16:17:46.378090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:25.222 [2024-07-15 16:17:46.378100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:65792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.222 [2024-07-15 16:17:46.378105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:25.222 [2024-07-15 16:17:46.378115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:25.222 [2024-07-15 16:17:46.378129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:25.222 [2024-07-15 16:17:46.378139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:65808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.222 [2024-07-15 16:17:46.378144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:25.222 [2024-07-15 16:17:46.378155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.222 [2024-07-15 16:17:46.378160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:25.222 [2024-07-15 16:17:46.378170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:65824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.222 [2024-07-15 16:17:46.378174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:25.222 [2024-07-15 16:17:46.378184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:65832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.222 [2024-07-15 16:17:46.378189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:25.222 [2024-07-15 16:17:46.378200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:65840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.222 [2024-07-15 16:17:46.378205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:25.222 [2024-07-15 16:17:46.378215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:65848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.222 [2024-07-15 16:17:46.378220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:25.222 [2024-07-15 16:17:46.378230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:65856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.222 [2024-07-15 16:17:46.378235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:25.222 [2024-07-15 16:17:46.378245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:65864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.222 [2024-07-15 16:17:46.378250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:25.222 [2024-07-15 16:17:46.378260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:65872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.222 [2024-07-15 16:17:46.378265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:25.222 [2024-07-15 16:17:46.378275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 
lba:65880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.222 [2024-07-15 16:17:46.378280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:25.222 [2024-07-15 16:17:46.378291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:65888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.222 [2024-07-15 16:17:46.378296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:25.222 [2024-07-15 16:17:46.378306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:65896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.222 [2024-07-15 16:17:46.378311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:25.222 [2024-07-15 16:17:46.378322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:65904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.222 [2024-07-15 16:17:46.378327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:25.222 [2024-07-15 16:17:46.378337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:65912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.222 [2024-07-15 16:17:46.378342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:25.222 [2024-07-15 16:17:46.378352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.222 [2024-07-15 16:17:46.378357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:25.222 [2024-07-15 16:17:46.378367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.222 [2024-07-15 16:17:46.378372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:25.223 [2024-07-15 16:17:46.378382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:65936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.223 [2024-07-15 16:17:46.378387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:25.223 [2024-07-15 16:17:46.378397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:65944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.223 [2024-07-15 16:17:46.378401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:25.223 [2024-07-15 16:17:46.378412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:65952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.223 [2024-07-15 16:17:46.378416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:25.223 [2024-07-15 16:17:46.378426] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:65960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.223 [2024-07-15 16:17:46.378432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:25.223 [2024-07-15 16:17:46.378442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:65968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.223 [2024-07-15 16:17:46.378447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.223 [2024-07-15 16:17:46.378457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:65976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.223 [2024-07-15 16:17:46.378462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.223 [2024-07-15 16:17:46.378472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.223 [2024-07-15 16:17:46.378477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:25.223 [2024-07-15 16:17:46.378487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.223 [2024-07-15 16:17:46.378492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:25.223 [2024-07-15 16:17:46.378503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:66000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.223 [2024-07-15 16:17:46.378508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:25.223 [2024-07-15 16:17:46.378519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:66008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.223 [2024-07-15 16:17:46.378523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:25.223 [2024-07-15 16:17:46.378533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:66016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.223 [2024-07-15 16:17:46.378538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:25.223 [2024-07-15 16:17:46.378548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.223 [2024-07-15 16:17:46.378553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:25.223 [2024-07-15 16:17:46.378563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:66032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.223 [2024-07-15 16:17:46.378567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 
00:26:25.223 [2024-07-15 16:17:46.378578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:66040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.223 [2024-07-15 16:17:46.378583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:25.223 [2024-07-15 16:17:46.378593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:66048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.223 [2024-07-15 16:17:46.378597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:25.223 [2024-07-15 16:17:46.378607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:66056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.223 [2024-07-15 16:17:46.378612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:25.223 [2024-07-15 16:17:46.378622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:66064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.223 [2024-07-15 16:17:46.378627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:25.223 [2024-07-15 16:17:46.378637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:66072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.223 [2024-07-15 16:17:46.378641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:25.223 [2024-07-15 16:17:46.378651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:66080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.223 [2024-07-15 16:17:46.378656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:25.223 [2024-07-15 16:17:46.378666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:66088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.223 [2024-07-15 16:17:46.378671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:25.223 [2024-07-15 16:17:46.378681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:66096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.223 [2024-07-15 16:17:46.378687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:25.223 [2024-07-15 16:17:46.378697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:66104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.223 [2024-07-15 16:17:46.378702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:25.223 [2024-07-15 16:17:46.378712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:66112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.223 [2024-07-15 16:17:46.378716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:12 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:25.223 [2024-07-15 16:17:46.378726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:66120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.223 [2024-07-15 16:17:46.378731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:25.223 [2024-07-15 16:17:46.378741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:66128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.223 [2024-07-15 16:17:46.378746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:25.223 [2024-07-15 16:17:46.378756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:66136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.223 [2024-07-15 16:17:46.378761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:25.223 [2024-07-15 16:17:46.378771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:66144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.223 [2024-07-15 16:17:46.378775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:25.223 [2024-07-15 16:17:46.378785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:66152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.223 [2024-07-15 16:17:46.378790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:25.223 [2024-07-15 16:17:46.378800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:66160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.223 [2024-07-15 16:17:46.378805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:25.223 [2024-07-15 16:17:46.378815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:66168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.223 [2024-07-15 16:17:46.378819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:25.223 [2024-07-15 16:17:46.378829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:66176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.223 [2024-07-15 16:17:46.378834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:25.223 [2024-07-15 16:17:46.378845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:66184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.223 [2024-07-15 16:17:46.378849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:25.223 [2024-07-15 16:17:46.378859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:66192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.223 [2024-07-15 16:17:46.378865] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:25.223 [2024-07-15 16:17:46.378875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:65176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.223 [2024-07-15 16:17:46.378880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:25.223 [2024-07-15 16:17:46.378890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.223 [2024-07-15 16:17:46.378895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:25.223 [2024-07-15 16:17:46.378905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.223 [2024-07-15 16:17:46.378909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:25.223 [2024-07-15 16:17:46.378919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:65256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.223 [2024-07-15 16:17:46.378924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.223 [2024-07-15 16:17:46.378934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:65264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.223 [2024-07-15 16:17:46.378939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.223 [2024-07-15 16:17:46.378949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.223 [2024-07-15 16:17:46.378953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:25.223 [2024-07-15 16:17:46.378966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.223 [2024-07-15 16:17:46.378971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:25.223 [2024-07-15 16:17:46.378981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:65288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.223 [2024-07-15 16:17:46.378986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:25.223 [2024-07-15 16:17:46.378996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.224 [2024-07-15 16:17:46.379001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:25.224 [2024-07-15 16:17:46.379012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.224 
[2024-07-15 16:17:46.379017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:25.224 [2024-07-15 16:17:46.379027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.224 [2024-07-15 16:17:46.379032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:25.224 [2024-07-15 16:17:46.379042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.224 [2024-07-15 16:17:46.379047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:25.224 [2024-07-15 16:17:46.379058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.224 [2024-07-15 16:17:46.379063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:25.224 [2024-07-15 16:17:46.379073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.224 [2024-07-15 16:17:46.379079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:25.224 [2024-07-15 16:17:46.379089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:65344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.224 [2024-07-15 16:17:46.379093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:25.224 [2024-07-15 16:17:46.379103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:65352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.224 [2024-07-15 16:17:46.379108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:25.224 [2024-07-15 16:17:46.379119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:65360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.224 [2024-07-15 16:17:46.379127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:25.224 [2024-07-15 16:17:46.379137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.224 [2024-07-15 16:17:46.379142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:25.224 [2024-07-15 16:17:46.379152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:65376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.224 [2024-07-15 16:17:46.379157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:25.224 [2024-07-15 16:17:46.379167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65384 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.224 [2024-07-15 16:17:46.379172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:25.224 [2024-07-15 16:17:46.379182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:65392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.224 [2024-07-15 16:17:46.379187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:25.224 [2024-07-15 16:17:46.379197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.224 [2024-07-15 16:17:46.379202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:25.224 [2024-07-15 16:17:46.379212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:65408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.224 [2024-07-15 16:17:46.379217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:25.224 [2024-07-15 16:17:46.379227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.224 [2024-07-15 16:17:46.379232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:25.224 [2024-07-15 16:17:46.379243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:65424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.224 [2024-07-15 16:17:46.379248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:25.224 [2024-07-15 16:17:46.379258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.224 [2024-07-15 16:17:46.379263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:25.224 [2024-07-15 16:17:46.379273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.224 [2024-07-15 16:17:46.379278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:25.224 [2024-07-15 16:17:46.379288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.224 [2024-07-15 16:17:46.379293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:25.224 [2024-07-15 16:17:46.379303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.224 [2024-07-15 16:17:46.379308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:25.224 [2024-07-15 16:17:46.379319] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.224 [2024-07-15 16:17:46.379324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:25.224 [2024-07-15 16:17:46.379334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.224 [2024-07-15 16:17:46.379340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:25.224 [2024-07-15 16:17:46.379350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.224 [2024-07-15 16:17:46.379355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:25.224 [2024-07-15 16:17:46.379365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:65488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.224 [2024-07-15 16:17:46.379370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:25.224 [2024-07-15 16:17:46.379380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:65496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.224 [2024-07-15 16:17:46.379385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:25.224 [2024-07-15 16:17:46.379395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.224 [2024-07-15 16:17:46.379400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:25.224 [2024-07-15 16:17:46.379410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:65512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.224 [2024-07-15 16:17:46.379415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.224 [2024-07-15 16:17:46.379425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:65520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.224 [2024-07-15 16:17:46.379431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.224 [2024-07-15 16:17:46.379442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:65528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.224 [2024-07-15 16:17:46.379446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:25.224 [2024-07-15 16:17:46.379457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:65536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.224 [2024-07-15 16:17:46.379462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:25.224 [2024-07-15 16:17:46.379472] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:65544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.224 [2024-07-15 16:17:46.379477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:25.224 [2024-07-15 16:17:46.379487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:65552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.224 [2024-07-15 16:17:46.379492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:25.224 [2024-07-15 16:17:46.379502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:65560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.224 [2024-07-15 16:17:46.379507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:25.224 [2024-07-15 16:17:46.379517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:65568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.224 [2024-07-15 16:17:46.379521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:25.224 [2024-07-15 16:17:46.379532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:65576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.224 [2024-07-15 16:17:46.379537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:25.224 [2024-07-15 16:17:46.379547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:65584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.224 [2024-07-15 16:17:46.379552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:25.224 [2024-07-15 16:17:46.379562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.224 [2024-07-15 16:17:46.379567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:25.224 [2024-07-15 16:17:46.379577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.224 [2024-07-15 16:17:46.379581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:25.224 [2024-07-15 16:17:46.379591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:65608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.224 [2024-07-15 16:17:46.379596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:25.224 [2024-07-15 16:17:46.379607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:65616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.224 [2024-07-15 16:17:46.379613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006e p:0 
m:0 dnr:0 00:26:25.224 [2024-07-15 16:17:46.379623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:65624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.225 [2024-07-15 16:17:46.379628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:25.225 [2024-07-15 16:17:46.379639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:65632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.225 [2024-07-15 16:17:46.379644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:25.225 [2024-07-15 16:17:46.379654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:65640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.225 [2024-07-15 16:17:46.379659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:25.225 [2024-07-15 16:17:46.379669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:65648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.225 [2024-07-15 16:17:46.379674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:25.225 [2024-07-15 16:17:46.379684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:65656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.225 [2024-07-15 16:17:46.379689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:25.225 [2024-07-15 16:17:46.379699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:65664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.225 [2024-07-15 16:17:46.379704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:25.225 [2024-07-15 16:17:46.379714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:65672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.225 [2024-07-15 16:17:46.379719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:25.225 [2024-07-15 16:17:46.379729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:65680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.225 [2024-07-15 16:17:46.379734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:25.225 [2024-07-15 16:17:46.379745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:65688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.225 [2024-07-15 16:17:46.379750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:25.225 [2024-07-15 16:17:46.380574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:65696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.225 [2024-07-15 16:17:46.380586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:25.225 [2024-07-15 16:17:46.380598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:65704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.225 [2024-07-15 16:17:46.380603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:25.225 [2024-07-15 16:17:46.380613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:65184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.225 [2024-07-15 16:17:46.380618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:25.225 [2024-07-15 16:17:46.380631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:65192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.225 [2024-07-15 16:17:46.380636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:25.225 [2024-07-15 16:17:46.380646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:65200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.225 [2024-07-15 16:17:46.380651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:25.225 [2024-07-15 16:17:46.380661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:65208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.225 [2024-07-15 16:17:46.380666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:25.225 [2024-07-15 16:17:46.380676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:65216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.225 [2024-07-15 16:17:46.380680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:25.225 [2024-07-15 16:17:46.380691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:65224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.225 [2024-07-15 16:17:46.380695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:25.225 [2024-07-15 16:17:46.380705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:65232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.225 [2024-07-15 16:17:46.380710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.225 [2024-07-15 16:17:46.380720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:65712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.225 [2024-07-15 16:17:46.380725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.225 [2024-07-15 16:17:46.380735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.225 [2024-07-15 16:17:46.380740] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.225 [2024-07-15 16:17:46.380750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:65728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.225 [2024-07-15 16:17:46.380754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:25.225 [2024-07-15 16:17:46.380765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:65736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.225 [2024-07-15 16:17:46.380770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:25.225 [2024-07-15 16:17:46.380780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:65744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.225 [2024-07-15 16:17:46.380785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:25.225 [2024-07-15 16:17:46.380795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:65752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.225 [2024-07-15 16:17:46.380800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:25.225 [2024-07-15 16:17:46.382053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:65760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.225 [2024-07-15 16:17:46.382061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:25.225 [2024-07-15 16:17:46.382072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:65768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.225 [2024-07-15 16:17:46.382077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:25.225 [2024-07-15 16:17:46.382087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:65776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.225 [2024-07-15 16:17:46.382091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:25.225 [2024-07-15 16:17:46.382101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:65784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.225 [2024-07-15 16:17:46.382106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:25.225 [2024-07-15 16:17:46.382116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:65792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.225 [2024-07-15 16:17:46.382121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:25.225 [2024-07-15 16:17:46.382134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:65800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:25.225 [2024-07-15 16:17:46.382138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:25.225 [2024-07-15 16:17:46.382149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:65808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.225 [2024-07-15 16:17:46.382153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:25.225 [2024-07-15 16:17:46.382163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:65816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.225 [2024-07-15 16:17:46.382168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:25.225 [2024-07-15 16:17:46.382178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:65824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.225 [2024-07-15 16:17:46.382183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:25.225 [2024-07-15 16:17:46.382193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:65832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.225 [2024-07-15 16:17:46.382198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:25.225 [2024-07-15 16:17:46.382208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:65840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.226 [2024-07-15 16:17:46.382212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:25.226 [2024-07-15 16:17:46.382223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.226 [2024-07-15 16:17:46.382227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:25.226 [2024-07-15 16:17:46.382239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.226 [2024-07-15 16:17:46.382246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:25.226 [2024-07-15 16:17:46.382256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:65864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.226 [2024-07-15 16:17:46.382261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:25.226 [2024-07-15 16:17:46.382271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:65872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.226 [2024-07-15 16:17:46.382275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:25.226 [2024-07-15 16:17:46.382285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 
lba:65880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.226 [2024-07-15 16:17:46.382290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:25.226 [2024-07-15 16:17:46.382300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:65888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.226 [2024-07-15 16:17:46.382305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:25.226 [2024-07-15 16:17:46.382449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:65896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.226 [2024-07-15 16:17:46.382456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:25.226 [2024-07-15 16:17:46.382467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:65904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.226 [2024-07-15 16:17:46.382472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:25.226 [2024-07-15 16:17:46.382482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:65912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.226 [2024-07-15 16:17:46.382487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:25.226 [2024-07-15 16:17:46.382497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:65920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.226 [2024-07-15 16:17:46.382502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:25.226 [2024-07-15 16:17:46.382512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:65928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.226 [2024-07-15 16:17:46.382517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:25.226 [2024-07-15 16:17:46.382527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:65936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.226 [2024-07-15 16:17:46.382532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:25.226 [2024-07-15 16:17:46.382542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:65944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.226 [2024-07-15 16:17:46.382546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:25.226 [2024-07-15 16:17:46.382556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:65952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.226 [2024-07-15 16:17:46.382563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:25.226 [2024-07-15 16:17:46.382573] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:65960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.226 [2024-07-15 16:17:46.382578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:25.226 [2024-07-15 16:17:46.382588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:65968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.226 [2024-07-15 16:17:46.382593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.226 [2024-07-15 16:17:46.382603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:65976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.226 [2024-07-15 16:17:46.382607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.226 [2024-07-15 16:17:46.382617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:65984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.226 [2024-07-15 16:17:46.382622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:25.226 [2024-07-15 16:17:46.382633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:65992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.226 [2024-07-15 16:17:46.382637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:25.226 [2024-07-15 16:17:46.382647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:66000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.226 [2024-07-15 16:17:46.382652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:25.226 [2024-07-15 16:17:46.382662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:66008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.226 [2024-07-15 16:17:46.382667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:25.226 [2024-07-15 16:17:46.382677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:66016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.226 [2024-07-15 16:17:46.382682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:25.226 [2024-07-15 16:17:46.382692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:66024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.226 [2024-07-15 16:17:46.382696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:25.226 [2024-07-15 16:17:46.382707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:66032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.226 [2024-07-15 16:17:46.382712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 
00:26:25.226 [2024-07-15 16:17:46.382721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:66040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.226 [2024-07-15 16:17:46.389362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:25.226 [2024-07-15 16:17:46.389397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:66048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.226 [2024-07-15 16:17:46.389403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:25.226 [2024-07-15 16:17:46.389417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:66056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.226 [2024-07-15 16:17:46.389422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:25.226 [2024-07-15 16:17:46.389433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:66064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.226 [2024-07-15 16:17:46.389438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:25.226 [2024-07-15 16:17:46.389449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:66072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.226 [2024-07-15 16:17:46.389455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:25.226 [2024-07-15 16:17:46.389466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:66080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.226 [2024-07-15 16:17:46.389473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:25.226 [2024-07-15 16:17:46.389484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:66088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.226 [2024-07-15 16:17:46.389489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:25.226 [2024-07-15 16:17:46.389501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:66096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.226 [2024-07-15 16:17:46.389508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:25.226 [2024-07-15 16:17:46.389519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:66104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.226 [2024-07-15 16:17:46.389524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:25.226 [2024-07-15 16:17:46.389537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:66112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.226 [2024-07-15 16:17:46.389542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:89 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:25.226 [2024-07-15 16:17:46.389554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:66120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.226 [2024-07-15 16:17:46.389560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:25.226 [2024-07-15 16:17:46.389570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:66128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.226 [2024-07-15 16:17:46.389575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:25.226 [2024-07-15 16:17:46.389585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:66136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.226 [2024-07-15 16:17:46.389590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:25.226 [2024-07-15 16:17:46.389601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:66144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.226 [2024-07-15 16:17:46.389606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:25.226 [2024-07-15 16:17:46.389618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:66152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.226 [2024-07-15 16:17:46.389623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:25.226 [2024-07-15 16:17:46.389633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:66160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.226 [2024-07-15 16:17:46.389637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:25.226 [2024-07-15 16:17:46.389647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:66168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.226 [2024-07-15 16:17:46.389652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:25.227 [2024-07-15 16:17:46.389662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:66176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.227 [2024-07-15 16:17:46.389667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:25.227 [2024-07-15 16:17:46.389677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.227 [2024-07-15 16:17:46.389682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:25.227 [2024-07-15 16:17:46.389691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:66192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.227 [2024-07-15 16:17:46.389696] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:25.227 [2024-07-15 16:17:46.389707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.227 [2024-07-15 16:17:46.389712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:25.227 [2024-07-15 16:17:46.390069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.227 [2024-07-15 16:17:46.390079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:25.227 [2024-07-15 16:17:46.390092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:65248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.227 [2024-07-15 16:17:46.390097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:25.227 [2024-07-15 16:17:46.390108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:65256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.227 [2024-07-15 16:17:46.390113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.227 [2024-07-15 16:17:46.390128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:65264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.227 [2024-07-15 16:17:46.390134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.227 [2024-07-15 16:17:46.390144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.227 [2024-07-15 16:17:46.390149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:25.227 [2024-07-15 16:17:46.390160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.227 [2024-07-15 16:17:46.390167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:25.227 [2024-07-15 16:17:46.390177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:65288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.227 [2024-07-15 16:17:46.390182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:25.227 [2024-07-15 16:17:46.390193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.227 [2024-07-15 16:17:46.390199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:25.227 [2024-07-15 16:17:46.390209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.227 [2024-07-15 
16:17:46.390214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:25.227 [2024-07-15 16:17:46.390224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:65312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.227 [2024-07-15 16:17:46.390231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:25.227 [2024-07-15 16:17:46.390241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.227 [2024-07-15 16:17:46.390246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:25.227 [2024-07-15 16:17:46.390256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.227 [2024-07-15 16:17:46.390261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:25.227 [2024-07-15 16:17:46.390272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.227 [2024-07-15 16:17:46.390276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:25.227 [2024-07-15 16:17:46.390286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:65344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.227 [2024-07-15 16:17:46.390292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:25.227 [2024-07-15 16:17:46.390302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.227 [2024-07-15 16:17:46.390307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:25.227 [2024-07-15 16:17:46.390317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:65360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.227 [2024-07-15 16:17:46.390322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:25.227 [2024-07-15 16:17:46.390333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.227 [2024-07-15 16:17:46.390338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:25.227 [2024-07-15 16:17:46.390348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:65376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.227 [2024-07-15 16:17:46.390355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:25.227 [2024-07-15 16:17:46.390366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:65384 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:25.227 [2024-07-15 16:17:46.390371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:25.227 [2024-07-15 16:17:46.390381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:65392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.227 [2024-07-15 16:17:46.390386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:25.227 [2024-07-15 16:17:46.390397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.227 [2024-07-15 16:17:46.390402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:25.227 [2024-07-15 16:17:46.390412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:65408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.227 [2024-07-15 16:17:46.390417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:25.227 [2024-07-15 16:17:46.390427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.227 [2024-07-15 16:17:46.390432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:25.227 [2024-07-15 16:17:46.390442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.227 [2024-07-15 16:17:46.390447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:25.227 [2024-07-15 16:17:46.390457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.227 [2024-07-15 16:17:46.390462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:25.227 [2024-07-15 16:17:46.390472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.227 [2024-07-15 16:17:46.390478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:25.227 [2024-07-15 16:17:46.390489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.227 [2024-07-15 16:17:46.390494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:25.227 [2024-07-15 16:17:46.390504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.227 [2024-07-15 16:17:46.390509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:25.227 [2024-07-15 16:17:46.390520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:28 nsid:1 lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.227 [2024-07-15 16:17:46.390525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:25.227 [2024-07-15 16:17:46.390535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.227 [2024-07-15 16:17:46.390540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:25.227 [2024-07-15 16:17:46.390551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.227 [2024-07-15 16:17:46.390557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:25.227 [2024-07-15 16:17:46.390567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:65488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.227 [2024-07-15 16:17:46.390572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:25.227 [2024-07-15 16:17:46.390582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:65496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.227 [2024-07-15 16:17:46.390587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:25.227 [2024-07-15 16:17:46.390597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.227 [2024-07-15 16:17:46.390601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:25.227 [2024-07-15 16:17:46.390612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:65512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.227 [2024-07-15 16:17:46.390616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.227 [2024-07-15 16:17:46.390627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.227 [2024-07-15 16:17:46.390632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.227 [2024-07-15 16:17:46.390642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.228 [2024-07-15 16:17:46.390646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:25.228 [2024-07-15 16:17:46.390657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:65536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.228 [2024-07-15 16:17:46.390662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:25.228 [2024-07-15 16:17:46.390672] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:65544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.228 [2024-07-15 16:17:46.390677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:25.228 [2024-07-15 16:17:46.390687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:65552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.228 [2024-07-15 16:17:46.390692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:25.228 [2024-07-15 16:17:46.390702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:65560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.228 [2024-07-15 16:17:46.390707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:25.228 [2024-07-15 16:17:46.390717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:65568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.228 [2024-07-15 16:17:46.390721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:25.228 [2024-07-15 16:17:46.390733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:65576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.228 [2024-07-15 16:17:46.390738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:25.228 [2024-07-15 16:17:46.390748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:65584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.228 [2024-07-15 16:17:46.390753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:25.228 [2024-07-15 16:17:46.390763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:65592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.228 [2024-07-15 16:17:46.390768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:25.228 [2024-07-15 16:17:46.390778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:65600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.228 [2024-07-15 16:17:46.390783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:25.228 [2024-07-15 16:17:46.390793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:65608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.228 [2024-07-15 16:17:46.390797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:25.228 [2024-07-15 16:17:46.390807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:65616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.228 [2024-07-15 16:17:46.390812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006e p:0 
m:0 dnr:0 00:26:25.228 [2024-07-15 16:17:46.390822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:65624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.228 [2024-07-15 16:17:46.390827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:25.228 [2024-07-15 16:17:46.390837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.228 [2024-07-15 16:17:46.390842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:25.228 [2024-07-15 16:17:46.390852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:65640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.228 [2024-07-15 16:17:46.390857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:25.228 [2024-07-15 16:17:46.390867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.228 [2024-07-15 16:17:46.390873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:25.228 [2024-07-15 16:17:46.390883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:65656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.228 [2024-07-15 16:17:46.390888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:25.228 [2024-07-15 16:17:46.390898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:65664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.228 [2024-07-15 16:17:46.390903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:25.228 [2024-07-15 16:17:46.390913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:65672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.228 [2024-07-15 16:17:46.390919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:25.228 [2024-07-15 16:17:46.390929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:65680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.228 [2024-07-15 16:17:46.390934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:25.228 [2024-07-15 16:17:46.390944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:65688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.228 [2024-07-15 16:17:46.390949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:25.228 [2024-07-15 16:17:46.390959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:65696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.228 [2024-07-15 16:17:46.390964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:25.228 [2024-07-15 16:17:46.390974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:65704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.228 [2024-07-15 16:17:46.390978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:25.228 [2024-07-15 16:17:46.390989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:65184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.228 [2024-07-15 16:17:46.390993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:25.228 [2024-07-15 16:17:46.391003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:65192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.228 [2024-07-15 16:17:46.391009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:25.228 [2024-07-15 16:17:46.391019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:65200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.228 [2024-07-15 16:17:46.391023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:25.228 [2024-07-15 16:17:46.391034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:65208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.228 [2024-07-15 16:17:46.391038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:25.228 [2024-07-15 16:17:46.391048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:65216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.228 [2024-07-15 16:17:46.391053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:25.228 [2024-07-15 16:17:46.391063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:65224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.228 [2024-07-15 16:17:46.391068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:25.228 [2024-07-15 16:17:46.391078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:65232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.228 [2024-07-15 16:17:46.391083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.228 [2024-07-15 16:17:46.391093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:65712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.228 [2024-07-15 16:17:46.391099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.228 [2024-07-15 16:17:46.391109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:65720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.228 [2024-07-15 16:17:46.391114] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.228 [2024-07-15 16:17:46.391129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.228 [2024-07-15 16:17:46.391134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:25.228 [2024-07-15 16:17:46.391145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:65736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.228 [2024-07-15 16:17:46.391150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:25.228 [2024-07-15 16:17:46.391160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:65744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.228 [2024-07-15 16:17:46.391165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:25.228 [2024-07-15 16:17:46.391175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:65752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.228 [2024-07-15 16:17:46.391180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:25.228 [2024-07-15 16:17:46.391190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:65760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.228 [2024-07-15 16:17:46.391195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:25.228 [2024-07-15 16:17:46.391205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:65768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.228 [2024-07-15 16:17:46.391210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:25.228 [2024-07-15 16:17:46.391220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:65776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.228 [2024-07-15 16:17:46.391225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:25.228 [2024-07-15 16:17:46.391235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.228 [2024-07-15 16:17:46.391239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:25.228 [2024-07-15 16:17:46.391250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.228 [2024-07-15 16:17:46.391255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:25.228 [2024-07-15 16:17:46.391264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:65800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:25.229 [2024-07-15 16:17:46.391269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:25.229 [2024-07-15 16:17:46.391280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:65808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.229 [2024-07-15 16:17:46.391284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:25.229 [2024-07-15 16:17:46.391296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:65816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.229 [2024-07-15 16:17:46.391301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:25.229 [2024-07-15 16:17:46.391311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:65824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.229 [2024-07-15 16:17:46.391315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:25.229 [2024-07-15 16:17:46.391326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:65832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.229 [2024-07-15 16:17:46.391330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:25.229 [2024-07-15 16:17:46.391340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:65840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.229 [2024-07-15 16:17:46.391345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:25.229 [2024-07-15 16:17:46.391355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:65848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.229 [2024-07-15 16:17:46.391360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:25.229 [2024-07-15 16:17:46.391370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:65856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.229 [2024-07-15 16:17:46.391375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:25.229 [2024-07-15 16:17:46.391385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:65864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.229 [2024-07-15 16:17:46.391390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:25.229 [2024-07-15 16:17:46.391400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:65872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.229 [2024-07-15 16:17:46.391405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:25.229 [2024-07-15 16:17:46.391415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 
lba:65880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.229 [2024-07-15 16:17:46.391420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:25.229 [2024-07-15 16:17:46.392057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.229 [2024-07-15 16:17:46.392069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:25.229 [2024-07-15 16:17:46.392080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:65896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.229 [2024-07-15 16:17:46.392085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:25.229 [2024-07-15 16:17:46.392096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:65904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.229 [2024-07-15 16:17:46.392100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:25.229 [2024-07-15 16:17:46.392113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:65912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.229 [2024-07-15 16:17:46.392118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:25.229 [2024-07-15 16:17:46.392132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:65920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.229 [2024-07-15 16:17:46.392138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:25.229 [2024-07-15 16:17:46.392148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:65928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.229 [2024-07-15 16:17:46.392153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:25.229 [2024-07-15 16:17:46.392163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:65936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.229 [2024-07-15 16:17:46.392168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:25.229 [2024-07-15 16:17:46.392178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.229 [2024-07-15 16:17:46.392183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:25.229 [2024-07-15 16:17:46.392193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.229 [2024-07-15 16:17:46.392198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:25.229 [2024-07-15 16:17:46.392208] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:65960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.229 [2024-07-15 16:17:46.392213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:25.229 [2024-07-15 16:17:46.392223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:65968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.229 [2024-07-15 16:17:46.392228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.229 [2024-07-15 16:17:46.392238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:65976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.229 [2024-07-15 16:17:46.392243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.229 [2024-07-15 16:17:46.392253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:65984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.229 [2024-07-15 16:17:46.392258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:25.229 [2024-07-15 16:17:46.392268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:65992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.229 [2024-07-15 16:17:46.392273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:25.229 [2024-07-15 16:17:46.392283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:66000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.229 [2024-07-15 16:17:46.392288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:25.229 [2024-07-15 16:17:46.392298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:66008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.229 [2024-07-15 16:17:46.392305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:25.229 [2024-07-15 16:17:46.392315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:66016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.229 [2024-07-15 16:17:46.392319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:25.229 [2024-07-15 16:17:46.392329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:66024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.229 [2024-07-15 16:17:46.392334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:25.229 [2024-07-15 16:17:46.392345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:66032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.229 [2024-07-15 16:17:46.392349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:25.229 [2024-07-15 
16:17:46.392359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:66040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.229 [2024-07-15 16:17:46.392364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:25.229 [2024-07-15 16:17:46.392374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:66048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.229 [2024-07-15 16:17:46.392379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:25.229 [2024-07-15 16:17:46.392389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:66056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.229 [2024-07-15 16:17:46.392394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:25.229 [2024-07-15 16:17:46.392404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:66064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.229 [2024-07-15 16:17:46.392409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:25.229 [2024-07-15 16:17:46.392419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:66072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.229 [2024-07-15 16:17:46.392424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:25.229 [2024-07-15 16:17:46.392434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:66080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.229 [2024-07-15 16:17:46.392439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:25.229 [2024-07-15 16:17:46.392449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:66088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.229 [2024-07-15 16:17:46.392454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:25.229 [2024-07-15 16:17:46.392464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:66096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.229 [2024-07-15 16:17:46.392468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:25.229 [2024-07-15 16:17:46.392478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:66104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.229 [2024-07-15 16:17:46.392485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:25.229 [2024-07-15 16:17:46.392495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:66112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.229 [2024-07-15 16:17:46.392499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 
sqhd:0033 p:0 m:0 dnr:0 00:26:25.229 [2024-07-15 16:17:46.392510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:66120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.229 [2024-07-15 16:17:46.392514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:25.230 [2024-07-15 16:17:46.392524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:66128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.230 [2024-07-15 16:17:46.392529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:25.230 [2024-07-15 16:17:46.392539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:66136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.230 [2024-07-15 16:17:46.392544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:25.230 [2024-07-15 16:17:46.392554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:66144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.230 [2024-07-15 16:17:46.392559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:25.230 [2024-07-15 16:17:46.392569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:66152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.230 [2024-07-15 16:17:46.392574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:25.230 [2024-07-15 16:17:46.392584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:66160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.230 [2024-07-15 16:17:46.392589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:25.230 [2024-07-15 16:17:46.392599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:66168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.230 [2024-07-15 16:17:46.392603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:25.230 [2024-07-15 16:17:46.392613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:66176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.230 [2024-07-15 16:17:46.392618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:25.230 [2024-07-15 16:17:46.392628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:66184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.230 [2024-07-15 16:17:46.392633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:25.230 [2024-07-15 16:17:46.392643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:66192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.230 [2024-07-15 16:17:46.392648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:25.230 [2024-07-15 16:17:46.392943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:65176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.230 [2024-07-15 16:17:46.392951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:25.230 [2024-07-15 16:17:46.392966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.230 [2024-07-15 16:17:46.392971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:25.230 [2024-07-15 16:17:46.392981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:65248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.230 [2024-07-15 16:17:46.392985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:25.230 [2024-07-15 16:17:46.392995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:65256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.230 [2024-07-15 16:17:46.393000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.230 [2024-07-15 16:17:46.393010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:65264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.230 [2024-07-15 16:17:46.393015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.230 [2024-07-15 16:17:46.393025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.230 [2024-07-15 16:17:46.393030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:25.230 [2024-07-15 16:17:46.393041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.230 [2024-07-15 16:17:46.393046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:25.230 [2024-07-15 16:17:46.393057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:65288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.230 [2024-07-15 16:17:46.393061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:25.230 [2024-07-15 16:17:46.393071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.230 [2024-07-15 16:17:46.393076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:25.230 [2024-07-15 16:17:46.393086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.230 [2024-07-15 16:17:46.393091] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:25.230 [2024-07-15 16:17:46.393101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:65312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.230 [2024-07-15 16:17:46.393106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:25.230 [2024-07-15 16:17:46.393116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.230 [2024-07-15 16:17:46.393125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:25.230 [2024-07-15 16:17:46.393135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.230 [2024-07-15 16:17:46.393141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:25.230 [2024-07-15 16:17:46.393153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.230 [2024-07-15 16:17:46.393158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:25.230 [2024-07-15 16:17:46.393168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:65344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.230 [2024-07-15 16:17:46.393174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:25.230 [2024-07-15 16:17:46.393184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:65352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.230 [2024-07-15 16:17:46.393189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:25.230 [2024-07-15 16:17:46.393199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:65360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.230 [2024-07-15 16:17:46.393204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:25.230 [2024-07-15 16:17:46.393214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.230 [2024-07-15 16:17:46.393219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:25.230 [2024-07-15 16:17:46.393229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:65376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.230 [2024-07-15 16:17:46.393234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:25.230 [2024-07-15 16:17:46.393244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:25.230 [2024-07-15 16:17:46.393249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:25.230 [2024-07-15 16:17:46.393259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:65392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.230 [2024-07-15 16:17:46.393264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:25.230 [2024-07-15 16:17:46.393274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.230 [2024-07-15 16:17:46.393279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:25.230 [2024-07-15 16:17:46.393289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:65408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.230 [2024-07-15 16:17:46.393295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:25.230 [2024-07-15 16:17:46.393305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.230 [2024-07-15 16:17:46.393310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:25.230 [2024-07-15 16:17:46.393493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.230 [2024-07-15 16:17:46.393500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:25.230 [2024-07-15 16:17:46.393512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.230 [2024-07-15 16:17:46.393518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:25.230 [2024-07-15 16:17:46.393528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.231 [2024-07-15 16:17:46.393534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:25.231 [2024-07-15 16:17:46.393544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.231 [2024-07-15 16:17:46.393549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:25.231 [2024-07-15 16:17:46.393559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.231 [2024-07-15 16:17:46.393564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:25.231 [2024-07-15 16:17:46.393574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 
lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.231 [2024-07-15 16:17:46.393579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:25.231 [2024-07-15 16:17:46.393589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.231 [2024-07-15 16:17:46.393594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:25.231 [2024-07-15 16:17:46.393604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.231 [2024-07-15 16:17:46.393609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:25.231 [2024-07-15 16:17:46.393682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:65488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.231 [2024-07-15 16:17:46.393689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:25.231 [2024-07-15 16:17:46.393699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:65496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.231 [2024-07-15 16:17:46.393704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:25.231 [2024-07-15 16:17:46.393714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.231 [2024-07-15 16:17:46.393719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:25.231 [2024-07-15 16:17:46.393729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:65512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.231 [2024-07-15 16:17:46.393734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.231 [2024-07-15 16:17:46.393744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.231 [2024-07-15 16:17:46.393749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.231 [2024-07-15 16:17:46.393759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:65528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.231 [2024-07-15 16:17:46.393766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:25.231 [2024-07-15 16:17:46.393776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.231 [2024-07-15 16:17:46.393781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:25.231 [2024-07-15 16:17:46.393791] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:65544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.231 [2024-07-15 16:17:46.393798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:25.231 [2024-07-15 16:17:46.393884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:65552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.231 [2024-07-15 16:17:46.393891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:25.231 [2024-07-15 16:17:46.393902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:65560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.231 [2024-07-15 16:17:46.393907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:25.231 [2024-07-15 16:17:46.393917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:65568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.231 [2024-07-15 16:17:46.393922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:25.231 [2024-07-15 16:17:46.393933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:65576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.231 [2024-07-15 16:17:46.393938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:25.231 [2024-07-15 16:17:46.393948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:65584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.231 [2024-07-15 16:17:46.393953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:25.231 [2024-07-15 16:17:46.393963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:65592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.231 [2024-07-15 16:17:46.393968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:25.231 [2024-07-15 16:17:46.393978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:65600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.231 [2024-07-15 16:17:46.393983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:25.231 [2024-07-15 16:17:46.393993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:65608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.231 [2024-07-15 16:17:46.393999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:25.231 [2024-07-15 16:17:46.394073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:65616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.231 [2024-07-15 16:17:46.394080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006e p:0 m:0 dnr:0 
00:26:25.231 [2024-07-15 16:17:46.394091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:65624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.231 [2024-07-15 16:17:46.394096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:25.231 [2024-07-15 16:17:46.394108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:65632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.231 [2024-07-15 16:17:46.394113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:25.231 [2024-07-15 16:17:46.394127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.231 [2024-07-15 16:17:46.394133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:25.231 [2024-07-15 16:17:46.394143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.231 [2024-07-15 16:17:46.394148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:25.231 [2024-07-15 16:17:46.394158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:65656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.231 [2024-07-15 16:17:46.394163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:25.231 [2024-07-15 16:17:46.394174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:65664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.231 [2024-07-15 16:17:46.394179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:25.231 [2024-07-15 16:17:46.394189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:65672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.231 [2024-07-15 16:17:46.394195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:25.231 [2024-07-15 16:17:46.394645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:65680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.231 [2024-07-15 16:17:46.394653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:25.231 [2024-07-15 16:17:46.394664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:65688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.231 [2024-07-15 16:17:46.394669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:25.231 [2024-07-15 16:17:46.394679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:65696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.231 [2024-07-15 16:17:46.394684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:121 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:25.231 [2024-07-15 16:17:46.394694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.231 [2024-07-15 16:17:46.394699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:25.231 [2024-07-15 16:17:46.394709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:65184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.231 [2024-07-15 16:17:46.394715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:25.231 [2024-07-15 16:17:46.394726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:65192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.231 [2024-07-15 16:17:46.394731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:25.231 [2024-07-15 16:17:46.394743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:65200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.231 [2024-07-15 16:17:46.394748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:25.231 [2024-07-15 16:17:46.394761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:65208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.231 [2024-07-15 16:17:46.394766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:25.231 [2024-07-15 16:17:46.394776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.231 [2024-07-15 16:17:46.394782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:25.231 [2024-07-15 16:17:46.394792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:65224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.231 [2024-07-15 16:17:46.394796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:25.231 [2024-07-15 16:17:46.394807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:65232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.231 [2024-07-15 16:17:46.394811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.232 [2024-07-15 16:17:46.394821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:65712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.232 [2024-07-15 16:17:46.394826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.232 [2024-07-15 16:17:46.394836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:65720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.232 [2024-07-15 16:17:46.394841] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.232 [2024-07-15 16:17:46.394851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:65728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.232 [2024-07-15 16:17:46.394856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:25.232 [2024-07-15 16:17:46.394866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:65736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.232 [2024-07-15 16:17:46.399342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:25.232 [2024-07-15 16:17:46.399517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:65744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.232 [2024-07-15 16:17:46.399528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:25.232 [2024-07-15 16:17:46.399540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:65752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.232 [2024-07-15 16:17:46.399545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:25.232 [2024-07-15 16:17:46.399556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.232 [2024-07-15 16:17:46.399561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:25.232 [2024-07-15 16:17:46.399570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:65768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.232 [2024-07-15 16:17:46.399578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:25.232 [2024-07-15 16:17:46.399588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:65776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.232 [2024-07-15 16:17:46.399593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:25.232 [2024-07-15 16:17:46.399603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:65784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.232 [2024-07-15 16:17:46.399608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:25.232 [2024-07-15 16:17:46.399618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:65792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.232 [2024-07-15 16:17:46.399623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:25.232 [2024-07-15 16:17:46.399633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:65800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:25.232 [2024-07-15 16:17:46.399638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:25.232 [2024-07-15 16:17:46.399648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:65808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.232 [2024-07-15 16:17:46.399652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:25.232 [2024-07-15 16:17:46.399662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:65816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.232 [2024-07-15 16:17:46.399667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:25.232 [2024-07-15 16:17:46.399677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:65824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.232 [2024-07-15 16:17:46.399682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:25.232 [2024-07-15 16:17:46.399692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:65832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.232 [2024-07-15 16:17:46.399697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:25.232 [2024-07-15 16:17:46.399707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:65840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.232 [2024-07-15 16:17:46.399712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:25.232 [2024-07-15 16:17:46.399721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:65848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.232 [2024-07-15 16:17:46.399726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:25.232 [2024-07-15 16:17:46.399736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:65856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.232 [2024-07-15 16:17:46.399741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:25.232 [2024-07-15 16:17:46.399751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:65864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.232 [2024-07-15 16:17:46.399757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:25.232 [2024-07-15 16:17:46.399767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:65872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.232 [2024-07-15 16:17:46.399772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:25.232 [2024-07-15 16:17:46.399781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 
lba:65880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.232 [2024-07-15 16:17:46.399786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:25.232 [2024-07-15 16:17:46.399796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:65888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.232 [2024-07-15 16:17:46.399801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:25.232 [2024-07-15 16:17:46.399811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:65896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.232 [2024-07-15 16:17:46.399816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:25.232 [2024-07-15 16:17:46.399826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:65904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.232 [2024-07-15 16:17:46.399830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:25.232 [2024-07-15 16:17:46.399840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:65912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.232 [2024-07-15 16:17:46.399845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:25.232 [2024-07-15 16:17:46.399855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:65920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.232 [2024-07-15 16:17:46.399860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:25.232 [2024-07-15 16:17:46.399870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:65928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.232 [2024-07-15 16:17:46.399875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:25.232 [2024-07-15 16:17:46.399885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:65936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.232 [2024-07-15 16:17:46.399890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:25.232 [2024-07-15 16:17:46.399900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:65944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.232 [2024-07-15 16:17:46.399905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:25.232 [2024-07-15 16:17:46.399915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:65952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.232 [2024-07-15 16:17:46.399920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:25.232 [2024-07-15 16:17:46.399930] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:65960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.232 [2024-07-15 16:17:46.399934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:25.232 [2024-07-15 16:17:46.399946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:65968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.232 [2024-07-15 16:17:46.399950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.232 [2024-07-15 16:17:46.399960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:65976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.232 [2024-07-15 16:17:46.399965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.232 [2024-07-15 16:17:46.399975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:65984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.232 [2024-07-15 16:17:46.399980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:25.232 [2024-07-15 16:17:46.399990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:65992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.232 [2024-07-15 16:17:46.399995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:25.232 [2024-07-15 16:17:46.400005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:66000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.232 [2024-07-15 16:17:46.400010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:25.232 [2024-07-15 16:17:46.400020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:66008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.232 [2024-07-15 16:17:46.400025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:25.232 [2024-07-15 16:17:46.400035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:66016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.232 [2024-07-15 16:17:46.400040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:25.233 [2024-07-15 16:17:46.400050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:66024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.233 [2024-07-15 16:17:46.400055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:25.233 [2024-07-15 16:17:46.400064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:66032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.233 [2024-07-15 16:17:46.400069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 
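[Editor's aside] The repeated notices above are pairs: nvme_io_qpair_print_command prints an I/O (READ/WRITE) submission, and spdk_nvme_print_completion prints its completion with NVMe status ASYMMETRIC ACCESS INACCESSIBLE (status code type 03h / status code 02h, i.e. path-related, namespace inaccessible on this path), which is what the ANA-related failover exercise in this test run is expected to provoke. A minimal sketch for tallying these completions from the saved console output; the log file name build.log and the script itself are hypothetical, not part of the test suite, and the regexes only match the notice format visible above. re.findall is used because the captured log fuses many notices onto one physical line.

#!/usr/bin/env python3
# Tally NVMe I/O commands and ASYMMETRIC ACCESS INACCESSIBLE completions
# from an SPDK autotest console log (file name is an assumption).
import re
import sys
from collections import Counter

# Submission notices: "...nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 ..."
CMD_RE = re.compile(r'nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) sqid:(\d+) cid:(\d+)')
# Completion notices: "...spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 ..."
CPL_RE = re.compile(r'spdk_nvme_print_completion: \*NOTICE\*: ASYMMETRIC ACCESS INACCESSIBLE \(03/02\) qid:(\d+) cid:(\d+)')

commands = Counter()
failures = Counter()

with open(sys.argv[1] if len(sys.argv) > 1 else 'build.log') as log:
    for line in log:
        # A single physical line in the captured log can hold many notices,
        # so collect every match on the line rather than just the first.
        for opcode, sqid, _cid in CMD_RE.findall(line):
            commands[(opcode, sqid)] += 1
        for qid, _cid in CPL_RE.findall(line):
            failures[qid] += 1

for (opcode, sqid), n in sorted(commands.items()):
    print(f'{opcode:5s} sqid:{sqid} commands printed: {n}')
for qid, n in sorted(failures.items()):
    print(f'qid:{qid} ASYMMETRIC ACCESS INACCESSIBLE completions: {n}')

Running it as "python3 tally_ana.py console.log" would summarize, per queue, how many submissions were printed and how many completed with the inaccessible ANA status, which is usually enough to confirm the errors are confined to the path being failed over rather than spread across queues.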
00:26:25.233 [2024-07-15 16:17:46.400079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:66040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.233 [2024-07-15 16:17:46.400084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:25.233 [2024-07-15 16:17:46.400094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:66048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.233 [2024-07-15 16:17:46.400099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:25.233 [2024-07-15 16:17:46.400109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:66056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.233 [2024-07-15 16:17:46.400113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:25.233 [2024-07-15 16:17:46.400130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:66064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.233 [2024-07-15 16:17:46.400135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:25.233 [2024-07-15 16:17:46.400145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:66072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.233 [2024-07-15 16:17:46.400150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:25.233 [2024-07-15 16:17:46.400160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:66080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.233 [2024-07-15 16:17:46.400165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:25.233 [2024-07-15 16:17:46.400174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:66088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.233 [2024-07-15 16:17:46.400179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:25.233 [2024-07-15 16:17:46.400189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:66096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.233 [2024-07-15 16:17:46.400194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:25.233 [2024-07-15 16:17:46.400204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:66104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.233 [2024-07-15 16:17:46.400209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:25.233 [2024-07-15 16:17:46.400219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:66112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.233 [2024-07-15 16:17:46.400223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:56 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:25.233 [2024-07-15 16:17:46.400233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:66120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.233 [2024-07-15 16:17:46.400238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:25.233 [2024-07-15 16:17:46.400248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:66128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.233 [2024-07-15 16:17:46.400253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:25.233 [2024-07-15 16:17:46.400263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:66136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.233 [2024-07-15 16:17:46.400268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:25.233 [2024-07-15 16:17:46.400277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:66144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.233 [2024-07-15 16:17:46.400282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:25.233 [2024-07-15 16:17:46.400292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:66152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.233 [2024-07-15 16:17:46.400297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:25.233 [2024-07-15 16:17:46.400307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:66160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.233 [2024-07-15 16:17:46.400314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:25.233 [2024-07-15 16:17:46.400324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.233 [2024-07-15 16:17:46.400329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:25.233 [2024-07-15 16:17:46.400339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:66176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.233 [2024-07-15 16:17:46.400344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:25.233 [2024-07-15 16:17:46.400353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:66184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.233 [2024-07-15 16:17:46.400358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:25.233 [2024-07-15 16:17:46.400368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:66192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.233 [2024-07-15 16:17:46.400373] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:25.233 [2024-07-15 16:17:46.400384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:65176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.233 [2024-07-15 16:17:46.400388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:25.233 [2024-07-15 16:17:46.400398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.233 [2024-07-15 16:17:46.400403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:25.233 [2024-07-15 16:17:46.400413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:65248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.233 [2024-07-15 16:17:46.400418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:25.233 [2024-07-15 16:17:46.400428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.233 [2024-07-15 16:17:46.400433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.233 [2024-07-15 16:17:46.400443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:65264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.233 [2024-07-15 16:17:46.400448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.233 [2024-07-15 16:17:46.400457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.233 [2024-07-15 16:17:46.400462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:25.233 [2024-07-15 16:17:46.400472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.233 [2024-07-15 16:17:46.400477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:25.233 [2024-07-15 16:17:46.400487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:65288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.233 [2024-07-15 16:17:46.400493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:25.233 [2024-07-15 16:17:46.400503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.233 [2024-07-15 16:17:46.400508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:25.233 [2024-07-15 16:17:46.400518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.233 [2024-07-15 
16:17:46.400523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:25.233 [2024-07-15 16:17:46.400533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.233 [2024-07-15 16:17:46.400538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:25.233 [2024-07-15 16:17:46.400548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.233 [2024-07-15 16:17:46.400553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:25.233 [2024-07-15 16:17:46.400563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.233 [2024-07-15 16:17:46.400568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:25.233 [2024-07-15 16:17:46.400577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.233 [2024-07-15 16:17:46.400582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:25.233 [2024-07-15 16:17:46.400592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:65344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.233 [2024-07-15 16:17:46.400597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:25.234 [2024-07-15 16:17:46.400607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:65352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.234 [2024-07-15 16:17:46.400612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:25.234 [2024-07-15 16:17:46.400622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:65360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.234 [2024-07-15 16:17:46.400626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:25.234 [2024-07-15 16:17:46.400636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.234 [2024-07-15 16:17:46.400641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:25.234 [2024-07-15 16:17:46.400651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:65376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.234 [2024-07-15 16:17:46.400656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:25.234 [2024-07-15 16:17:46.400666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:65384 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:25.234 [2024-07-15 16:17:46.400671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:25.234 [2024-07-15 16:17:46.400682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:65392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.234 [2024-07-15 16:17:46.400687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:25.234 [2024-07-15 16:17:46.400697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.234 [2024-07-15 16:17:46.400702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:25.234 [2024-07-15 16:17:46.400712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:65408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.234 [2024-07-15 16:17:46.400716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:25.234 [2024-07-15 16:17:46.400726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.234 [2024-07-15 16:17:46.400731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:25.234 [2024-07-15 16:17:46.400741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:65424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.234 [2024-07-15 16:17:46.400746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:25.234 [2024-07-15 16:17:46.400756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.234 [2024-07-15 16:17:46.400761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:25.234 [2024-07-15 16:17:46.400771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.234 [2024-07-15 16:17:46.400776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:25.234 [2024-07-15 16:17:46.400785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.234 [2024-07-15 16:17:46.400790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:25.234 [2024-07-15 16:17:46.400801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.234 [2024-07-15 16:17:46.400806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:25.234 [2024-07-15 16:17:46.400816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:33 nsid:1 lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.234 [2024-07-15 16:17:46.400821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:25.234 [2024-07-15 16:17:46.400830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.234 [2024-07-15 16:17:46.400835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:25.234 [2024-07-15 16:17:46.400845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.234 [2024-07-15 16:17:46.400850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:25.234 [2024-07-15 16:17:46.400861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.234 [2024-07-15 16:17:46.400866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:25.234 [2024-07-15 16:17:46.400876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:65496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.234 [2024-07-15 16:17:46.400881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:25.234 [2024-07-15 16:17:46.400891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.234 [2024-07-15 16:17:46.400895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:25.234 [2024-07-15 16:17:46.400905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:65512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.234 [2024-07-15 16:17:46.400910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.234 [2024-07-15 16:17:46.400920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:65520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.234 [2024-07-15 16:17:46.400925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.234 [2024-07-15 16:17:46.400935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:65528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.234 [2024-07-15 16:17:46.400940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:25.234 [2024-07-15 16:17:46.400950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:65536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.234 [2024-07-15 16:17:46.400955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:25.234 [2024-07-15 16:17:46.400965] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:65544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.234 [2024-07-15 16:17:46.400969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:25.234 [2024-07-15 16:17:46.400979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:65552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.234 [2024-07-15 16:17:46.400984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:25.234 [2024-07-15 16:17:46.400994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:65560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.234 [2024-07-15 16:17:46.400999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:25.234 [2024-07-15 16:17:46.401009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:65568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.234 [2024-07-15 16:17:46.401014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:25.234 [2024-07-15 16:17:46.401024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:65576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.234 [2024-07-15 16:17:46.401029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:25.234 [2024-07-15 16:17:46.401039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:65584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.234 [2024-07-15 16:17:46.401045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:25.234 [2024-07-15 16:17:46.401055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:65592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.234 [2024-07-15 16:17:46.401060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:25.234 [2024-07-15 16:17:46.401070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:65600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.234 [2024-07-15 16:17:46.401075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:25.234 [2024-07-15 16:17:46.401085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:65608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.234 [2024-07-15 16:17:46.401090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:25.235 [2024-07-15 16:17:46.401100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:65616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.235 [2024-07-15 16:17:46.401105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006e p:0 m:0 
dnr:0 00:26:25.235 [2024-07-15 16:17:46.401115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:65624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.235 [2024-07-15 16:17:46.401120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:25.235 [2024-07-15 16:17:46.401133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.235 [2024-07-15 16:17:46.401138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:25.235 [2024-07-15 16:17:46.401148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.235 [2024-07-15 16:17:46.401153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:25.235 [2024-07-15 16:17:46.401163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:65648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.235 [2024-07-15 16:17:46.401168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:25.235 [2024-07-15 16:17:46.401177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:65656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.235 [2024-07-15 16:17:46.401182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:25.235 [2024-07-15 16:17:46.401192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:65664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.235 [2024-07-15 16:17:46.401197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:25.235 [2024-07-15 16:17:46.402030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:65672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.235 [2024-07-15 16:17:46.402042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:25.235 [2024-07-15 16:17:46.402054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:65680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.235 [2024-07-15 16:17:46.402063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:25.235 [2024-07-15 16:17:46.402073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:65688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.235 [2024-07-15 16:17:46.402078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:25.235 [2024-07-15 16:17:46.402088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:65696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.235 [2024-07-15 16:17:46.402093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:25.235 [2024-07-15 16:17:46.402103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:65704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.235 [2024-07-15 16:17:46.402108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:25.235 [2024-07-15 16:17:46.402118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:65184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.235 [2024-07-15 16:17:46.402127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:25.235 [2024-07-15 16:17:46.402138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.235 [2024-07-15 16:17:46.402143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:25.235 [2024-07-15 16:17:46.402153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:65200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.235 [2024-07-15 16:17:46.402158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:25.235 [2024-07-15 16:17:46.402167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:65208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.235 [2024-07-15 16:17:46.402172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:25.235 [2024-07-15 16:17:46.402182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:65216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.235 [2024-07-15 16:17:46.402187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:25.235 [2024-07-15 16:17:46.402197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:65224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.235 [2024-07-15 16:17:46.402202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:25.235 [2024-07-15 16:17:46.402212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:65232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.235 [2024-07-15 16:17:46.402217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.235 [2024-07-15 16:17:46.402227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:65712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.235 [2024-07-15 16:17:46.402232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.235 [2024-07-15 16:17:46.402242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:65720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.235 [2024-07-15 16:17:46.402246] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.235 [2024-07-15 16:17:46.402258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:65728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.235 [2024-07-15 16:17:46.402263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:25.235 [2024-07-15 16:17:46.403846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:65736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.235 [2024-07-15 16:17:46.403854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:25.235 [2024-07-15 16:17:46.403865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:65744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.235 [2024-07-15 16:17:46.403870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:25.235 [2024-07-15 16:17:46.403880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:65752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.235 [2024-07-15 16:17:46.403886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:25.235 [2024-07-15 16:17:46.403896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.235 [2024-07-15 16:17:46.403901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:25.235 [2024-07-15 16:17:46.403911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.235 [2024-07-15 16:17:46.403915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:25.235 [2024-07-15 16:17:46.403926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:65776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.235 [2024-07-15 16:17:46.403931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:25.235 [2024-07-15 16:17:46.403941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:65784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.235 [2024-07-15 16:17:46.403946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:25.235 [2024-07-15 16:17:46.403956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:65792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.235 [2024-07-15 16:17:46.403961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:25.235 [2024-07-15 16:17:46.403971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:65800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:25.235 [2024-07-15 16:17:46.403976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:25.235 [2024-07-15 16:17:46.403986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:65808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.235 [2024-07-15 16:17:46.403991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:25.235 [2024-07-15 16:17:46.404002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:65816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.235 [2024-07-15 16:17:46.404006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:25.235 [2024-07-15 16:17:46.404018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:65824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.235 [2024-07-15 16:17:46.404023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:25.235 [2024-07-15 16:17:46.404033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:65832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.235 [2024-07-15 16:17:46.404038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:25.235 [2024-07-15 16:17:46.404049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:65840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.235 [2024-07-15 16:17:46.404054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:25.235 [2024-07-15 16:17:46.404064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:65848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.235 [2024-07-15 16:17:46.404070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:25.235 [2024-07-15 16:17:46.404080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:65856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.235 [2024-07-15 16:17:46.404085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:25.235 [2024-07-15 16:17:46.404095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:65864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.235 [2024-07-15 16:17:46.404100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:25.235 [2024-07-15 16:17:46.404248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:65872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.235 [2024-07-15 16:17:46.404256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:25.235 [2024-07-15 16:17:46.404266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 
lba:65880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.235 [2024-07-15 16:17:46.404271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:25.235 [2024-07-15 16:17:46.404281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:65888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.236 [2024-07-15 16:17:46.404286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:25.236 [2024-07-15 16:17:46.404296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.236 [2024-07-15 16:17:46.404301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:25.236 [2024-07-15 16:17:46.404311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:65904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.236 [2024-07-15 16:17:46.404316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:25.236 [2024-07-15 16:17:46.404325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.236 [2024-07-15 16:17:46.404330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:25.236 [2024-07-15 16:17:46.404340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:65920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.236 [2024-07-15 16:17:46.404347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:25.236 [2024-07-15 16:17:46.404357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.236 [2024-07-15 16:17:46.404362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:25.236 [2024-07-15 16:17:46.404372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:65936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.236 [2024-07-15 16:17:46.404376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:25.236 [2024-07-15 16:17:46.404386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:65944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.236 [2024-07-15 16:17:46.404391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:25.236 [2024-07-15 16:17:46.404401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.236 [2024-07-15 16:17:46.404406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:25.236 [2024-07-15 16:17:46.404416] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:65960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.236 [2024-07-15 16:17:46.404421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:25.236 [2024-07-15 16:17:46.404431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:65968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.236 [2024-07-15 16:17:46.404436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.236 [2024-07-15 16:17:46.404446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:65976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.236 [2024-07-15 16:17:46.404450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.236 [2024-07-15 16:17:46.404460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:65984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.236 [2024-07-15 16:17:46.404465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:25.236 [2024-07-15 16:17:46.404475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:65992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.236 [2024-07-15 16:17:46.404480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:25.236 [2024-07-15 16:17:46.404490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:66000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.236 [2024-07-15 16:17:46.404495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:25.236 [2024-07-15 16:17:46.404505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:66008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.236 [2024-07-15 16:17:46.404510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:25.236 [2024-07-15 16:17:46.404520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:66016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.236 [2024-07-15 16:17:46.404526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:25.236 [2024-07-15 16:17:46.404536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:66024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.236 [2024-07-15 16:17:46.404541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:25.236 [2024-07-15 16:17:46.404551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:66032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.236 [2024-07-15 16:17:46.404556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 
00:26:25.236 [2024-07-15 16:17:46.404566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:66040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.236 [2024-07-15 16:17:46.404571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:25.236 [2024-07-15 16:17:46.404581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:66048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.236 [2024-07-15 16:17:46.404586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:25.236 [2024-07-15 16:17:46.404595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:66056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.236 [2024-07-15 16:17:46.404600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:25.236 [2024-07-15 16:17:46.404610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:66064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.236 [2024-07-15 16:17:46.404615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:25.236 [2024-07-15 16:17:46.404625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:66072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.236 [2024-07-15 16:17:46.404630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:25.236 [2024-07-15 16:17:46.404640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:66080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.236 [2024-07-15 16:17:46.404644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:25.236 [2024-07-15 16:17:46.404654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:66088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.236 [2024-07-15 16:17:46.404659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:25.236 [2024-07-15 16:17:46.404669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:66096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.236 [2024-07-15 16:17:46.404674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:25.236 [2024-07-15 16:17:46.404684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:66104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.236 [2024-07-15 16:17:46.404689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:25.236 [2024-07-15 16:17:46.404698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:66112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.236 [2024-07-15 16:17:46.404703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:116 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:25.236 [2024-07-15 16:17:46.404715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:66120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.236 [2024-07-15 16:17:46.404719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:25.236 [2024-07-15 16:17:46.404729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:66128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.236 [2024-07-15 16:17:46.404734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:25.236 [2024-07-15 16:17:46.404744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:66136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.236 [2024-07-15 16:17:46.404749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:25.236 [2024-07-15 16:17:46.404759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:66144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.236 [2024-07-15 16:17:46.404764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:25.236 [2024-07-15 16:17:46.404773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:66152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.236 [2024-07-15 16:17:46.404778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:25.236 [2024-07-15 16:17:46.404788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:66160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.236 [2024-07-15 16:17:46.404793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:25.236 [2024-07-15 16:17:46.404803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:66168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.236 [2024-07-15 16:17:46.404808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:25.236 [2024-07-15 16:17:46.404818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:66176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.236 [2024-07-15 16:17:46.404822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:25.236 [2024-07-15 16:17:46.404832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:66184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.236 [2024-07-15 16:17:46.404837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:25.236 [2024-07-15 16:17:46.404847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:66192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.236 [2024-07-15 16:17:46.404852] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:25.236 [2024-07-15 16:17:46.404862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:65176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.236 [2024-07-15 16:17:46.404867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:25.236 [2024-07-15 16:17:46.404876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.236 [2024-07-15 16:17:46.404881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:25.236 [2024-07-15 16:17:46.404892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:65248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.236 [2024-07-15 16:17:46.404897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:25.237 [2024-07-15 16:17:46.404907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:65256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.237 [2024-07-15 16:17:46.404912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.237 [2024-07-15 16:17:46.404922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:65264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.237 [2024-07-15 16:17:46.404927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.237 [2024-07-15 16:17:46.404937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.237 [2024-07-15 16:17:46.404942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:25.237 [2024-07-15 16:17:46.404952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.237 [2024-07-15 16:17:46.404957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:25.237 [2024-07-15 16:17:46.404967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:65288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.237 [2024-07-15 16:17:46.404972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:25.237 [2024-07-15 16:17:46.404982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.237 [2024-07-15 16:17:46.404987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:25.237 [2024-07-15 16:17:46.404997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.237 
[2024-07-15 16:17:46.405001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:25.237 [2024-07-15 16:17:46.405012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:65312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.237 [2024-07-15 16:17:46.405016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:25.237 [2024-07-15 16:17:46.405026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.237 [2024-07-15 16:17:46.405031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:25.237 [2024-07-15 16:17:46.405041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.237 [2024-07-15 16:17:46.405046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:25.237 [2024-07-15 16:17:46.405056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.237 [2024-07-15 16:17:46.405061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:25.237 [2024-07-15 16:17:46.405471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:65344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.237 [2024-07-15 16:17:46.405482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:25.237 [2024-07-15 16:17:46.405493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.237 [2024-07-15 16:17:46.405498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:25.237 [2024-07-15 16:17:46.405508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:65360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.237 [2024-07-15 16:17:46.405513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:25.237 [2024-07-15 16:17:46.405523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.237 [2024-07-15 16:17:46.405528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:25.237 [2024-07-15 16:17:46.405538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:65376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.237 [2024-07-15 16:17:46.405543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:25.237 [2024-07-15 16:17:46.405553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:65384 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.237 [2024-07-15 16:17:46.405558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:25.237 [2024-07-15 16:17:46.405568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:65392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.237 [2024-07-15 16:17:46.405573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:25.237 [2024-07-15 16:17:46.405583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.237 [2024-07-15 16:17:46.405588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:25.237 [2024-07-15 16:17:46.405598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:65408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.237 [2024-07-15 16:17:46.405603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:25.237 [2024-07-15 16:17:46.405613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.237 [2024-07-15 16:17:46.405618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:25.237 [2024-07-15 16:17:46.405628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:65424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.237 [2024-07-15 16:17:46.405633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:25.237 [2024-07-15 16:17:46.405643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.237 [2024-07-15 16:17:46.405647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:25.237 [2024-07-15 16:17:46.405657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.237 [2024-07-15 16:17:46.405664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:25.237 [2024-07-15 16:17:46.405674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.237 [2024-07-15 16:17:46.405679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:25.237 [2024-07-15 16:17:46.405689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.237 [2024-07-15 16:17:46.405694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:25.237 [2024-07-15 16:17:46.405703] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.237 [2024-07-15 16:17:46.405708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:25.237 [2024-07-15 16:17:46.405718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.237 [2024-07-15 16:17:46.405723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:25.237 [2024-07-15 16:17:46.405733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.237 [2024-07-15 16:17:46.405738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:25.237 [2024-07-15 16:17:46.405748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:65488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.237 [2024-07-15 16:17:46.405753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:25.237 [2024-07-15 16:17:46.405763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:65496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.237 [2024-07-15 16:17:46.405768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:25.237 [2024-07-15 16:17:46.405778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.237 [2024-07-15 16:17:46.405783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:25.237 [2024-07-15 16:17:46.405793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:65512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.237 [2024-07-15 16:17:46.405798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.237 [2024-07-15 16:17:46.405808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:65520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.237 [2024-07-15 16:17:46.405813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.237 [2024-07-15 16:17:46.405823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:65528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.237 [2024-07-15 16:17:46.405828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:25.237 [2024-07-15 16:17:46.406012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:65536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.237 [2024-07-15 16:17:46.406019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:25.237 [2024-07-15 16:17:46.406031] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:65544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.237 [2024-07-15 16:17:46.406036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:25.237 [2024-07-15 16:17:46.406046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:65552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.237 [2024-07-15 16:17:46.406051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:25.237 [2024-07-15 16:17:46.406064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:65560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.237 [2024-07-15 16:17:46.406070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:25.237 [2024-07-15 16:17:46.406080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:65568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.237 [2024-07-15 16:17:46.406085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:25.237 [2024-07-15 16:17:46.406096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:65576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.237 [2024-07-15 16:17:46.406101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:25.237 [2024-07-15 16:17:46.406111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:65584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.238 [2024-07-15 16:17:46.406115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:25.238 [2024-07-15 16:17:46.406130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:65592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.238 [2024-07-15 16:17:46.406135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:25.238 [2024-07-15 16:17:46.406145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:65600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.238 [2024-07-15 16:17:46.406150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:25.238 [2024-07-15 16:17:46.406160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:65608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.238 [2024-07-15 16:17:46.406165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:25.238 [2024-07-15 16:17:46.406175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:65616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.238 [2024-07-15 16:17:46.406180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006e p:0 
m:0 dnr:0 00:26:25.238 [2024-07-15 16:17:46.406190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:65624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.238 [2024-07-15 16:17:46.406194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:25.238 [2024-07-15 16:17:46.406205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:65632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.238 [2024-07-15 16:17:46.406209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:25.238 [2024-07-15 16:17:46.406221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.238 [2024-07-15 16:17:46.406226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:25.238 [2024-07-15 16:17:46.406236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:65648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.238 [2024-07-15 16:17:46.406241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:25.238 [2024-07-15 16:17:46.406251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:65656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.238 [2024-07-15 16:17:46.406256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:25.238 [2024-07-15 16:17:46.406385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:65664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.238 [2024-07-15 16:17:46.406391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:25.238 [2024-07-15 16:17:46.406402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:65672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.238 [2024-07-15 16:17:46.406407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:25.238 [2024-07-15 16:17:46.406417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:65680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.238 [2024-07-15 16:17:46.406422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:25.238 [2024-07-15 16:17:46.406432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.238 [2024-07-15 16:17:46.406437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:25.238 [2024-07-15 16:17:46.406447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.238 [2024-07-15 16:17:46.406452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:25.238 [2024-07-15 16:17:46.406462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:65704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.238 [2024-07-15 16:17:46.406468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:25.238 [2024-07-15 16:17:46.406478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:65184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.238 [2024-07-15 16:17:46.406483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:25.238 [2024-07-15 16:17:46.406493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:65192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.238 [2024-07-15 16:17:46.406498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:25.238 [2024-07-15 16:17:46.406508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.238 [2024-07-15 16:17:46.406513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:25.238 [2024-07-15 16:17:46.406523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:65208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.238 [2024-07-15 16:17:46.406530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:25.238 [2024-07-15 16:17:46.406540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:65216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.238 [2024-07-15 16:17:46.406545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:25.238 [2024-07-15 16:17:46.406555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:65224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.238 [2024-07-15 16:17:46.406561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:25.238 [2024-07-15 16:17:46.406571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:65232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.238 [2024-07-15 16:17:46.406575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.238 [2024-07-15 16:17:46.406586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:65712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.238 [2024-07-15 16:17:46.406591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.238 [2024-07-15 16:17:46.406601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:65720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.238 [2024-07-15 16:17:46.406606] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.238 [2024-07-15 16:17:46.406732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:65728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.238 [2024-07-15 16:17:46.406738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:25.238 [2024-07-15 16:17:46.406749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:65736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.238 [2024-07-15 16:17:46.406755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:25.238 [2024-07-15 16:17:46.406766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:65744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.238 [2024-07-15 16:17:46.406771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:25.238 [2024-07-15 16:17:46.406782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:65752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.238 [2024-07-15 16:17:46.406787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:25.238 [2024-07-15 16:17:46.406797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:65760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.238 [2024-07-15 16:17:46.406802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:25.238 [2024-07-15 16:17:46.406812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:65768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.238 [2024-07-15 16:17:46.406817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:25.238 [2024-07-15 16:17:46.406827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:65776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.238 [2024-07-15 16:17:46.406833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:25.238 [2024-07-15 16:17:46.406844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:65784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.238 [2024-07-15 16:17:46.406849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:25.238 [2024-07-15 16:17:46.406927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:65792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.238 [2024-07-15 16:17:46.406934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:25.238 [2024-07-15 16:17:46.406945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:65800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:25.238 [2024-07-15 16:17:46.406950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:25.238 [2024-07-15 16:17:46.406960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:65808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.239 [2024-07-15 16:17:46.406965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:25.239 [2024-07-15 16:17:46.406974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:65816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.239 [2024-07-15 16:17:46.406979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:25.239 [2024-07-15 16:17:46.406990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.239 [2024-07-15 16:17:46.406995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:25.239 [2024-07-15 16:17:46.407004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.239 [2024-07-15 16:17:46.407011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:25.239 [2024-07-15 16:17:46.407021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:65840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.239 [2024-07-15 16:17:46.407026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:25.239 [2024-07-15 16:17:46.407036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:65848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.239 [2024-07-15 16:17:46.407041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:25.239 [2024-07-15 16:17:46.407306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:65856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.239 [2024-07-15 16:17:46.407313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:25.239 [2024-07-15 16:17:46.407324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:65864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.239 [2024-07-15 16:17:46.407329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:25.239 [2024-07-15 16:17:46.407339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:65872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.239 [2024-07-15 16:17:46.407344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:25.239 [2024-07-15 16:17:46.407356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 
lba:65880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.239 [2024-07-15 16:17:46.407361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:25.239 [2024-07-15 16:17:46.407371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:65888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.239 [2024-07-15 16:17:46.407376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:25.239 [2024-07-15 16:17:46.407386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:65896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.239 [2024-07-15 16:17:46.407390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:25.239 [2024-07-15 16:17:46.407400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:65904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.239 [2024-07-15 16:17:46.407405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:25.239 [2024-07-15 16:17:46.407415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:65912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.239 [2024-07-15 16:17:46.407420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:25.239 [2024-07-15 16:17:46.407512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.239 [2024-07-15 16:17:46.407519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:25.239 [2024-07-15 16:17:46.407529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.239 [2024-07-15 16:17:46.407534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:25.239 [2024-07-15 16:17:46.407544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:65936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.239 [2024-07-15 16:17:46.407549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:25.239 [2024-07-15 16:17:46.407559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:65944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.239 [2024-07-15 16:17:46.407564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:25.239 [2024-07-15 16:17:46.407574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:65952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.239 [2024-07-15 16:17:46.407578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:25.239 [2024-07-15 16:17:46.407589] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:65960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.239 [2024-07-15 16:17:46.407593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:25.239 [2024-07-15 16:17:46.407603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:65968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.239 [2024-07-15 16:17:46.407608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.239 [2024-07-15 16:17:46.407620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:65976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.239 [2024-07-15 16:17:46.407625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.239 [2024-07-15 16:17:46.407760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.239 [2024-07-15 16:17:46.407766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:25.239 [2024-07-15 16:17:46.407777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:65992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.239 [2024-07-15 16:17:46.407782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:25.239 [2024-07-15 16:17:46.407792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:66000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.239 [2024-07-15 16:17:46.407797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:25.239 [2024-07-15 16:17:46.407806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:66008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.239 [2024-07-15 16:17:46.407811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:25.239 [2024-07-15 16:17:46.407821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:66016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.239 [2024-07-15 16:17:46.407826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:25.239 [2024-07-15 16:17:46.407837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:66024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.239 [2024-07-15 16:17:46.407841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:25.239 [2024-07-15 16:17:46.407851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:66032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.239 [2024-07-15 16:17:46.407856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:25.239 
[2024-07-15 16:17:46.407866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:66040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.239 [2024-07-15 16:17:46.407871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:25.239 [2024-07-15 16:17:46.408206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:66048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.239 [2024-07-15 16:17:46.408213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:25.239 [2024-07-15 16:17:46.408223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:66056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.239 [2024-07-15 16:17:46.408229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:25.239 [2024-07-15 16:17:46.408239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:66064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.239 [2024-07-15 16:17:46.408243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:25.239 [2024-07-15 16:17:46.408253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:66072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.239 [2024-07-15 16:17:46.408262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:25.239 [2024-07-15 16:17:46.408272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:66080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.239 [2024-07-15 16:17:46.408277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:25.239 [2024-07-15 16:17:46.408287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:66088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.239 [2024-07-15 16:17:46.408292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:25.239 [2024-07-15 16:17:46.408302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:66096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.239 [2024-07-15 16:17:46.408307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:25.239 [2024-07-15 16:17:46.408317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:66104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.239 [2024-07-15 16:17:46.408322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:25.239 [2024-07-15 16:17:46.408418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:66112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.239 [2024-07-15 16:17:46.408425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:66 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:25.239 [2024-07-15 16:17:46.408435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:66120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.239 [2024-07-15 16:17:46.408440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:25.239 [2024-07-15 16:17:46.408450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:66128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.239 [2024-07-15 16:17:46.408455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:25.239 [2024-07-15 16:17:46.408465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:66136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.239 [2024-07-15 16:17:46.408470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:25.240 [2024-07-15 16:17:46.408480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:66144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.240 [2024-07-15 16:17:46.408485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:25.240 [2024-07-15 16:17:46.408495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:66152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.240 [2024-07-15 16:17:46.408500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:25.240 [2024-07-15 16:17:46.408510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:66160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.240 [2024-07-15 16:17:46.408515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:25.240 [2024-07-15 16:17:46.408525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:66168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.240 [2024-07-15 16:17:46.408531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:25.240 [2024-07-15 16:17:46.408817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:66176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.240 [2024-07-15 16:17:46.408823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:25.240 [2024-07-15 16:17:46.408834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:66184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.240 [2024-07-15 16:17:46.408839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:25.240 [2024-07-15 16:17:46.408849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:66192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.240 [2024-07-15 16:17:46.408853] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:25.240 [2024-07-15 16:17:46.408864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:65176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.240 [2024-07-15 16:17:46.408868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:25.240 [2024-07-15 16:17:46.408878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.240 [2024-07-15 16:17:46.408883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:25.240 [2024-07-15 16:17:46.408893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.240 [2024-07-15 16:17:46.408898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:25.240 [2024-07-15 16:17:46.408908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:65256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.240 [2024-07-15 16:17:46.408913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.240 [2024-07-15 16:17:46.408923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:65264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.240 [2024-07-15 16:17:46.408928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.240 [2024-07-15 16:17:46.408939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.240 [2024-07-15 16:17:46.408944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:25.240 [2024-07-15 16:17:46.409029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.240 [2024-07-15 16:17:46.409036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:25.240 [2024-07-15 16:17:46.409047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.240 [2024-07-15 16:17:46.409052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:25.240 [2024-07-15 16:17:46.409063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.240 [2024-07-15 16:17:46.409068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:25.240 [2024-07-15 16:17:46.409079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.240 [2024-07-15 
16:17:46.409084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:25.240 [2024-07-15 16:17:46.409094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:65312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.240 [2024-07-15 16:17:46.409099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:25.240 [2024-07-15 16:17:46.409109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.240 [2024-07-15 16:17:46.409114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:25.240 [2024-07-15 16:17:46.409127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.240 [2024-07-15 16:17:46.409132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:25.240 [2024-07-15 16:17:46.409143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.240 [2024-07-15 16:17:46.409148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:25.240 [2024-07-15 16:17:46.409512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:65344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.240 [2024-07-15 16:17:46.409518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:25.240 [2024-07-15 16:17:46.409529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:65352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.240 [2024-07-15 16:17:46.409534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:25.240 [2024-07-15 16:17:46.409544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:65360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.240 [2024-07-15 16:17:46.409549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:25.240 [2024-07-15 16:17:46.409559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.240 [2024-07-15 16:17:46.409564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:25.240 [2024-07-15 16:17:46.409574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:65376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.240 [2024-07-15 16:17:46.409579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:25.240 [2024-07-15 16:17:46.409589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:65384 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:25.240 [2024-07-15 16:17:46.409594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:25.240 [2024-07-15 16:17:46.409604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:65392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.240 [2024-07-15 16:17:46.409609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:25.240 [2024-07-15 16:17:46.409621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.240 [2024-07-15 16:17:46.409626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:25.240 [2024-07-15 16:17:46.409711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:65408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.240 [2024-07-15 16:17:46.409718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:25.240 [2024-07-15 16:17:46.409728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.240 [2024-07-15 16:17:46.409733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:25.240 [2024-07-15 16:17:46.409743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:65424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.240 [2024-07-15 16:17:46.409748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:25.240 [2024-07-15 16:17:46.409758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.240 [2024-07-15 16:17:46.409763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:25.240 [2024-07-15 16:17:46.409773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.240 [2024-07-15 16:17:46.409778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:25.240 [2024-07-15 16:17:46.409788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.240 [2024-07-15 16:17:46.409793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:25.240 [2024-07-15 16:17:46.409803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.240 [2024-07-15 16:17:46.409808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:25.240 [2024-07-15 16:17:46.409818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:18 nsid:1 lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.240 [2024-07-15 16:17:46.409823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:25.240 [2024-07-15 16:17:46.409927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.240 [2024-07-15 16:17:46.409934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:25.240 [2024-07-15 16:17:46.409944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.240 [2024-07-15 16:17:46.409950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:25.240 [2024-07-15 16:17:46.409959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:65488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.240 [2024-07-15 16:17:46.409965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:25.240 [2024-07-15 16:17:46.409974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:65496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.240 [2024-07-15 16:17:46.409981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:25.240 [2024-07-15 16:17:46.409991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.241 [2024-07-15 16:17:46.409996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:25.241 [2024-07-15 16:17:46.410006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.241 [2024-07-15 16:17:46.410011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.241 [2024-07-15 16:17:46.410021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:65520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.241 [2024-07-15 16:17:46.410025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.241 [2024-07-15 16:17:46.410036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:65528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.241 [2024-07-15 16:17:46.410040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:25.241 [2024-07-15 16:17:46.410372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:65536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.241 [2024-07-15 16:17:46.410379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:25.241 [2024-07-15 16:17:46.410389] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.241 [2024-07-15 16:17:46.410394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:25.241 [2024-07-15 16:17:46.410404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:65552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.241 [2024-07-15 16:17:46.410409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:25.241 [2024-07-15 16:17:46.410419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:65560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.241 [2024-07-15 16:17:46.410424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:25.241 [2024-07-15 16:17:46.410434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:65568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.241 [2024-07-15 16:17:46.410439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:25.241 [2024-07-15 16:17:46.410449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:65576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.241 [2024-07-15 16:17:46.410454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:25.241 [2024-07-15 16:17:46.410464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:65584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.241 [2024-07-15 16:17:46.410469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:25.241 [2024-07-15 16:17:46.410479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:65592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.241 [2024-07-15 16:17:46.410486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:25.241 [2024-07-15 16:17:46.410589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:65600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.241 [2024-07-15 16:17:46.410595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:25.241 [2024-07-15 16:17:46.410606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:65608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.241 [2024-07-15 16:17:46.410611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:25.241 [2024-07-15 16:17:46.410621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:65616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.241 [2024-07-15 16:17:46.410626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006e p:0 
m:0 dnr:0 00:26:25.241 [2024-07-15 16:17:46.410636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:65624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.241 [2024-07-15 16:17:46.410641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:25.241 [2024-07-15 16:17:46.410651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:65632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.241 [2024-07-15 16:17:46.410656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:25.241 [2024-07-15 16:17:46.410666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:65640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.241 [2024-07-15 16:17:46.410671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:25.241 [2024-07-15 16:17:46.410681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:65648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.241 [2024-07-15 16:17:46.410686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:25.241 [2024-07-15 16:17:46.410696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:65656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.241 [2024-07-15 16:17:46.410701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:25.241 [2024-07-15 16:17:46.411118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:65664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.241 [2024-07-15 16:17:46.411127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:25.241 [2024-07-15 16:17:46.411138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:65672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.241 [2024-07-15 16:17:46.411143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:25.241 [2024-07-15 16:17:46.411153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:65680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.241 [2024-07-15 16:17:46.411158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:25.241 [2024-07-15 16:17:46.411168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:65688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.241 [2024-07-15 16:17:46.411173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:25.241 [2024-07-15 16:17:46.411185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:65696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.241 [2024-07-15 16:17:46.411190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:25.241 [2024-07-15 16:17:46.411200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.241 [2024-07-15 16:17:46.411205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:25.241 [2024-07-15 16:17:46.411215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:65184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.241 [2024-07-15 16:17:46.411219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:25.241 [2024-07-15 16:17:46.411230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:65192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.241 [2024-07-15 16:17:46.411235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:25.241 [2024-07-15 16:17:46.411245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:65200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.241 [2024-07-15 16:17:46.411249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:25.241 [2024-07-15 16:17:46.411259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:65208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.241 [2024-07-15 16:17:46.411264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:25.241 [2024-07-15 16:17:46.411274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:65216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.241 [2024-07-15 16:17:46.411279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:25.241 [2024-07-15 16:17:46.411289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:65224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.241 [2024-07-15 16:17:46.411294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:25.241 [2024-07-15 16:17:46.411304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:65232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.241 [2024-07-15 16:17:46.411309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.241 [2024-07-15 16:17:46.411319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:65712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.241 [2024-07-15 16:17:46.411324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.241 [2024-07-15 16:17:46.411334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:65720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.241 [2024-07-15 16:17:46.411339] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.241 [2024-07-15 16:17:46.413183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:65728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.241 [2024-07-15 16:17:46.413191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:25.241 [2024-07-15 16:17:46.413205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:65736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.241 [2024-07-15 16:17:46.413210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:25.241 [2024-07-15 16:17:46.413220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:65744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.241 [2024-07-15 16:17:46.413225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:25.241 [2024-07-15 16:17:46.413235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:65752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.241 [2024-07-15 16:17:46.413242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:25.241 [2024-07-15 16:17:46.413252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:65760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.241 [2024-07-15 16:17:46.413258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:25.241 [2024-07-15 16:17:46.413268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:65768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.241 [2024-07-15 16:17:46.413273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:25.241 [2024-07-15 16:17:46.413283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:65776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.241 [2024-07-15 16:17:46.413288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:25.242 [2024-07-15 16:17:46.413298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:65784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.242 [2024-07-15 16:17:46.413303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:25.242 [2024-07-15 16:17:46.413313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:65792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.242 [2024-07-15 16:17:46.413318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:25.242 [2024-07-15 16:17:46.413328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:25.242 [2024-07-15 16:17:46.413333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:25.242 [2024-07-15 16:17:46.413343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.242 [2024-07-15 16:17:46.413348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:25.242 [2024-07-15 16:17:46.413358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:65816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.242 [2024-07-15 16:17:46.413363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:25.242 [2024-07-15 16:17:46.413373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:65824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.242 [2024-07-15 16:17:46.413378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:25.242 [2024-07-15 16:17:46.413388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:65832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.242 [2024-07-15 16:17:46.413394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:25.242 [2024-07-15 16:17:46.413404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:65840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.242 [2024-07-15 16:17:46.413409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:25.242 [2024-07-15 16:17:46.413419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:65848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.242 [2024-07-15 16:17:46.413424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:25.242 [2024-07-15 16:17:46.413434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:65856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.242 [2024-07-15 16:17:46.413439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:25.242 [2024-07-15 16:17:46.413580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:65864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.242 [2024-07-15 16:17:46.413587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:25.242 [2024-07-15 16:17:46.413597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:65872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.242 [2024-07-15 16:17:46.413602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:25.242 [2024-07-15 16:17:46.413613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:65880 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.242 [2024-07-15 16:17:46.413617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:25.242 [2024-07-15 16:17:46.413627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:65888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.242 [2024-07-15 16:17:46.413632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:25.242 [2024-07-15 16:17:46.413642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:65896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.242 [2024-07-15 16:17:46.413648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:25.242 [2024-07-15 16:17:46.413658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:65904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.242 [2024-07-15 16:17:46.413663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:25.242 [2024-07-15 16:17:46.413673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:65912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.242 [2024-07-15 16:17:46.413679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:25.242 [2024-07-15 16:17:46.413689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:65920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.242 [2024-07-15 16:17:46.413694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:25.242 [2024-07-15 16:17:46.413705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:65928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.242 [2024-07-15 16:17:46.413711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:25.242 [2024-07-15 16:17:46.413722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:65936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.242 [2024-07-15 16:17:46.413727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:25.242 [2024-07-15 16:17:46.413737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:65944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.242 [2024-07-15 16:17:46.413742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:25.242 [2024-07-15 16:17:46.413752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:65952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.242 [2024-07-15 16:17:46.413757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:25.242 [2024-07-15 16:17:46.413767] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.242 [2024-07-15 16:17:46.413772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:25.242 [2024-07-15 16:17:46.413782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.242 [2024-07-15 16:17:46.413787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.242 [2024-07-15 16:17:46.413796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:65976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.242 [2024-07-15 16:17:46.413801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.242 [2024-07-15 16:17:46.413812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:65984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.242 [2024-07-15 16:17:46.413816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:25.242 [2024-07-15 16:17:46.413826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.242 [2024-07-15 16:17:46.413831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:25.242 [2024-07-15 16:17:46.413841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:66000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.242 [2024-07-15 16:17:46.413846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:25.242 [2024-07-15 16:17:46.413856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:66008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.242 [2024-07-15 16:17:46.413861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:25.242 [2024-07-15 16:17:46.413871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:66016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.242 [2024-07-15 16:17:46.413875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:25.242 [2024-07-15 16:17:46.413886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:66024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.242 [2024-07-15 16:17:46.413891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:25.242 [2024-07-15 16:17:46.413902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:66032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.242 [2024-07-15 16:17:46.413908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:25.242 [2024-07-15 16:17:46.413917] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:66040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.242 [2024-07-15 16:17:46.413922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:25.242 [2024-07-15 16:17:46.413932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:66048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.242 [2024-07-15 16:17:46.413937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:25.242 [2024-07-15 16:17:46.413947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:66056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.242 [2024-07-15 16:17:46.413952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:25.242 [2024-07-15 16:17:46.413962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:66064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.242 [2024-07-15 16:17:46.413967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:25.242 [2024-07-15 16:17:46.413977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:66072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.243 [2024-07-15 16:17:46.413982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:25.243 [2024-07-15 16:17:46.413992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:66080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.243 [2024-07-15 16:17:46.413996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:25.243 [2024-07-15 16:17:46.414006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:66088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.243 [2024-07-15 16:17:46.414011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:25.243 [2024-07-15 16:17:46.414021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:66096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.243 [2024-07-15 16:17:46.414026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:25.243 [2024-07-15 16:17:46.414036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:66104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.243 [2024-07-15 16:17:46.414041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:25.243 [2024-07-15 16:17:46.414051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:66112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.243 [2024-07-15 16:17:46.414055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0033 p:0 
m:0 dnr:0 00:26:25.243 [2024-07-15 16:17:46.414065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:66120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.243 [2024-07-15 16:17:46.414070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:25.243 [2024-07-15 16:17:46.414081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:66128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.243 [2024-07-15 16:17:46.414086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:25.243 [2024-07-15 16:17:46.414096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:66136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.243 [2024-07-15 16:17:46.414101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:25.243 [2024-07-15 16:17:46.414111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:66144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.243 [2024-07-15 16:17:46.414116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:25.243 [2024-07-15 16:17:46.414129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:66152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.243 [2024-07-15 16:17:46.414134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:25.243 [2024-07-15 16:17:46.414144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:66160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.243 [2024-07-15 16:17:46.414149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:25.243 [2024-07-15 16:17:46.414159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:66168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.243 [2024-07-15 16:17:46.414164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:25.243 [2024-07-15 16:17:46.414174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:66176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.243 [2024-07-15 16:17:46.414179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:25.243 [2024-07-15 16:17:46.414189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:66184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.243 [2024-07-15 16:17:46.414194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:25.243 [2024-07-15 16:17:46.414204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:66192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.243 [2024-07-15 16:17:46.414209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:25.243 [2024-07-15 16:17:46.414219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:65176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.243 [2024-07-15 16:17:46.414224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:25.243 [2024-07-15 16:17:46.414234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.243 [2024-07-15 16:17:46.414239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:25.243 [2024-07-15 16:17:46.414250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:65248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.243 [2024-07-15 16:17:46.414255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:25.243 [2024-07-15 16:17:46.414265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:65256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.243 [2024-07-15 16:17:46.414272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.243 [2024-07-15 16:17:46.414282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:65264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.243 [2024-07-15 16:17:46.414287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.243 [2024-07-15 16:17:46.414297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.243 [2024-07-15 16:17:46.414302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:25.243 [2024-07-15 16:17:46.414672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.243 [2024-07-15 16:17:46.414680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:25.243 [2024-07-15 16:17:46.414691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:65288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.243 [2024-07-15 16:17:46.414696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:25.243 [2024-07-15 16:17:46.414706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.243 [2024-07-15 16:17:46.414711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:25.243 [2024-07-15 16:17:46.414721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.243 [2024-07-15 16:17:46.414726] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:25.243 [2024-07-15 16:17:46.414735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:65312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.243 [2024-07-15 16:17:46.414740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:25.243 [2024-07-15 16:17:46.414750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.243 [2024-07-15 16:17:46.414755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:25.243 [2024-07-15 16:17:46.414765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.243 [2024-07-15 16:17:46.414770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:25.243 [2024-07-15 16:17:46.414780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.243 [2024-07-15 16:17:46.414785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:25.243 [2024-07-15 16:17:46.414795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:65344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.243 [2024-07-15 16:17:46.414800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:25.243 [2024-07-15 16:17:46.414810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:65352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.243 [2024-07-15 16:17:46.414817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:25.243 [2024-07-15 16:17:46.414827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:65360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.243 [2024-07-15 16:17:46.414832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:25.243 [2024-07-15 16:17:46.414841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.243 [2024-07-15 16:17:46.414846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:25.243 [2024-07-15 16:17:46.414857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:65376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.243 [2024-07-15 16:17:46.414862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:25.243 [2024-07-15 16:17:46.414872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:65384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:25.243 [2024-07-15 16:17:46.414877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:25.243 [2024-07-15 16:17:46.414887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:65392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.243 [2024-07-15 16:17:46.414892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:25.243 [2024-07-15 16:17:46.414902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.243 [2024-07-15 16:17:46.414906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:25.243 [2024-07-15 16:17:46.414916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:65408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.243 [2024-07-15 16:17:46.414921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:25.243 [2024-07-15 16:17:46.414931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.243 [2024-07-15 16:17:46.414936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:25.243 [2024-07-15 16:17:46.414946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:65424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.243 [2024-07-15 16:17:46.414951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:25.244 [2024-07-15 16:17:46.414961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.244 [2024-07-15 16:17:46.414965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:25.244 [2024-07-15 16:17:46.414976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.244 [2024-07-15 16:17:46.414980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:25.244 [2024-07-15 16:17:46.414991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.244 [2024-07-15 16:17:46.414996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:25.244 [2024-07-15 16:17:46.415007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.244 [2024-07-15 16:17:46.415013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:25.244 [2024-07-15 16:17:46.415023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 
lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.244 [2024-07-15 16:17:46.415028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:25.244 [2024-07-15 16:17:46.415038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.244 [2024-07-15 16:17:46.415043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:25.244 [2024-07-15 16:17:46.415053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.244 [2024-07-15 16:17:46.415058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:25.244 [2024-07-15 16:17:46.415068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:65488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.244 [2024-07-15 16:17:46.415074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:25.244 [2024-07-15 16:17:46.415084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:65496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.244 [2024-07-15 16:17:46.415089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:25.244 [2024-07-15 16:17:46.415099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.244 [2024-07-15 16:17:46.415104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:25.244 [2024-07-15 16:17:46.415114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:65512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.244 [2024-07-15 16:17:46.415119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.244 [2024-07-15 16:17:46.415132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:65520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.244 [2024-07-15 16:17:46.415138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.244 [2024-07-15 16:17:46.415148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:65528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.244 [2024-07-15 16:17:46.415153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:25.244 [2024-07-15 16:17:46.415412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:65536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.244 [2024-07-15 16:17:46.415419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:25.244 [2024-07-15 16:17:46.415431] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:65544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.244 [2024-07-15 16:17:46.415436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:25.244 [2024-07-15 16:17:46.415449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:65552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.244 [2024-07-15 16:17:46.415454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:25.244 [2024-07-15 16:17:46.415464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:65560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.244 [2024-07-15 16:17:46.415469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:25.244 [2024-07-15 16:17:46.415479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:65568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.244 [2024-07-15 16:17:46.415484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:25.244 [2024-07-15 16:17:46.415494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.244 [2024-07-15 16:17:46.415499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:25.244 [2024-07-15 16:17:46.415509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:65584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.244 [2024-07-15 16:17:46.415514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:25.244 [2024-07-15 16:17:46.415524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:65592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.244 [2024-07-15 16:17:46.415529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:25.244 [2024-07-15 16:17:46.415539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:65600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.244 [2024-07-15 16:17:46.415543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:25.244 [2024-07-15 16:17:46.415554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:65608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.244 [2024-07-15 16:17:46.415558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:25.244 [2024-07-15 16:17:46.415568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:65616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.244 [2024-07-15 16:17:46.415573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006e p:0 m:0 dnr:0 
00:26:25.244 [2024-07-15 16:17:46.415583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:65624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.244 [2024-07-15 16:17:46.415588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:25.244 [2024-07-15 16:17:46.415598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:65632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.244 [2024-07-15 16:17:46.415603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:25.244 [2024-07-15 16:17:46.415613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:65640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.244 [2024-07-15 16:17:46.415617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:25.244 [2024-07-15 16:17:46.415627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:65648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.244 [2024-07-15 16:17:46.415633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:25.244 [2024-07-15 16:17:46.415644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:65656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.244 [2024-07-15 16:17:46.415649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:25.244 [2024-07-15 16:17:46.415795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:65664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.244 [2024-07-15 16:17:46.415802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:25.244 [2024-07-15 16:17:46.415813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:65672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.244 [2024-07-15 16:17:46.415818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:25.244 [2024-07-15 16:17:46.415828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:65680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.244 [2024-07-15 16:17:46.415833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:25.244 [2024-07-15 16:17:46.415844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.244 [2024-07-15 16:17:46.415849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:25.244 [2024-07-15 16:17:46.415859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.244 [2024-07-15 16:17:46.415864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:90 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:25.244 [2024-07-15 16:17:46.415874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:65704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.244 [2024-07-15 16:17:46.415881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:25.244 [2024-07-15 16:17:46.415891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:65184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.244 [2024-07-15 16:17:46.415896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:25.244 [2024-07-15 16:17:46.415906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:65192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.244 [2024-07-15 16:17:46.415911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:25.244 [2024-07-15 16:17:46.415922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.244 [2024-07-15 16:17:46.415927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:25.244 [2024-07-15 16:17:46.415937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:65208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.244 [2024-07-15 16:17:46.415942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:25.244 [2024-07-15 16:17:46.415952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:65216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.244 [2024-07-15 16:17:46.415958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:25.245 [2024-07-15 16:17:46.415969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:65224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.245 [2024-07-15 16:17:46.415974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:25.245 [2024-07-15 16:17:46.415984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:65232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.245 [2024-07-15 16:17:46.415989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.245 [2024-07-15 16:17:46.415999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:65712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.245 [2024-07-15 16:17:46.416004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.245 [2024-07-15 16:17:46.416014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:65720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.245 [2024-07-15 16:17:46.416019] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.245 [2024-07-15 16:17:46.416134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:65728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.245 [2024-07-15 16:17:46.416141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:25.245 [2024-07-15 16:17:46.416152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:65736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.245 [2024-07-15 16:17:46.416157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:25.245 [2024-07-15 16:17:46.416167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:65744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.245 [2024-07-15 16:17:46.416171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:25.245 [2024-07-15 16:17:46.416181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:65752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.245 [2024-07-15 16:17:46.416186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:25.245 [2024-07-15 16:17:46.416196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:65760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.245 [2024-07-15 16:17:46.416201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:25.245 [2024-07-15 16:17:46.416211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.245 [2024-07-15 16:17:46.416216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:25.245 [2024-07-15 16:17:46.416226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.245 [2024-07-15 16:17:46.416231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:25.245 [2024-07-15 16:17:46.416241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:65784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.245 [2024-07-15 16:17:46.416246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:25.245 [2024-07-15 16:17:46.416473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:65792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.245 [2024-07-15 16:17:46.416479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:25.245 [2024-07-15 16:17:46.416490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:65800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.245 
[2024-07-15 16:17:46.416495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:25.245 [2024-07-15 16:17:46.416505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:65808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.245 [2024-07-15 16:17:46.416510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:25.245 [2024-07-15 16:17:46.416520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:65816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.245 [2024-07-15 16:17:46.416525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:25.245 [2024-07-15 16:17:46.416535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:65824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.245 [2024-07-15 16:17:46.416539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:25.245 [2024-07-15 16:17:46.416550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:65832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.245 [2024-07-15 16:17:46.416554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:25.245 [2024-07-15 16:17:46.416564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:65840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.245 [2024-07-15 16:17:46.416569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:25.245 [2024-07-15 16:17:46.416579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:65848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.245 [2024-07-15 16:17:46.416584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:25.245 [2024-07-15 16:17:46.416657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:65856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.245 [2024-07-15 16:17:46.416664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:25.245 [2024-07-15 16:17:46.416674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:65864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.245 [2024-07-15 16:17:46.416680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:25.245 [2024-07-15 16:17:46.416690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:65872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.245 [2024-07-15 16:17:46.416695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:25.245 [2024-07-15 16:17:46.416705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:65880 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.245 [2024-07-15 16:17:46.416710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:25.245 [2024-07-15 16:17:46.416722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:65888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.245 [2024-07-15 16:17:46.416726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:25.245 [2024-07-15 16:17:46.416737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:65896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.245 [2024-07-15 16:17:46.416742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:25.245 [2024-07-15 16:17:46.416752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.245 [2024-07-15 16:17:46.416756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:25.245 [2024-07-15 16:17:46.416767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:65912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.245 [2024-07-15 16:17:46.416772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:25.245 [2024-07-15 16:17:46.417022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:65920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.245 [2024-07-15 16:17:46.417028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:25.245 [2024-07-15 16:17:46.417039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.245 [2024-07-15 16:17:46.417044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:25.245 [2024-07-15 16:17:46.417055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:65936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.245 [2024-07-15 16:17:46.417060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:25.245 [2024-07-15 16:17:46.417070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:65944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.245 [2024-07-15 16:17:46.417075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:25.245 [2024-07-15 16:17:46.417085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:65952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.245 [2024-07-15 16:17:46.417090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:25.245 [2024-07-15 16:17:46.417100] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:65960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.245 [2024-07-15 16:17:46.417105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:25.245 [2024-07-15 16:17:46.417115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:65968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.245 [2024-07-15 16:17:46.417120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.245 [2024-07-15 16:17:46.417135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:65976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.245 [2024-07-15 16:17:46.417140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.245 [2024-07-15 16:17:46.417245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:65984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.245 [2024-07-15 16:17:46.417253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:25.245 [2024-07-15 16:17:46.417264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:65992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.245 [2024-07-15 16:17:46.417269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:25.245 [2024-07-15 16:17:46.417279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:66000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.246 [2024-07-15 16:17:46.417284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:25.246 [2024-07-15 16:17:46.417294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:66008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.246 [2024-07-15 16:17:46.417299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:25.246 [2024-07-15 16:17:46.417309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:66016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.246 [2024-07-15 16:17:46.417314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:25.246 [2024-07-15 16:17:46.417324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:66024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.246 [2024-07-15 16:17:46.417329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:25.246 [2024-07-15 16:17:46.417339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:66032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.246 [2024-07-15 16:17:46.417344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:25.246 [2024-07-15 
16:17:46.417354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:66040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.246 [2024-07-15 16:17:46.417359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:25.246 [2024-07-15 16:17:46.417605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:66048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.246 [2024-07-15 16:17:46.417612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:25.246 [2024-07-15 16:17:46.417622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:66056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.246 [2024-07-15 16:17:46.417627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:25.246 [2024-07-15 16:17:46.417637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:66064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.246 [2024-07-15 16:17:46.417642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:25.246 [2024-07-15 16:17:46.417652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:66072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.246 [2024-07-15 16:17:46.417657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:25.246 [2024-07-15 16:17:46.417667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:66080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.246 [2024-07-15 16:17:46.417673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:25.246 [2024-07-15 16:17:46.417684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:66088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.246 [2024-07-15 16:17:46.417689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:25.246 [2024-07-15 16:17:46.417698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:66096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.246 [2024-07-15 16:17:46.417703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:25.246 [2024-07-15 16:17:46.417714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:66104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.246 [2024-07-15 16:17:46.417719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:25.246 [2024-07-15 16:17:46.417817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:66112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.246 [2024-07-15 16:17:46.417824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 
sqhd:0033 p:0 m:0 dnr:0 00:26:25.246 [2024-07-15 16:17:46.417834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:66120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.246 [2024-07-15 16:17:46.417839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:25.246 [2024-07-15 16:17:46.417849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:66128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.246 [2024-07-15 16:17:46.417854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:25.246 [2024-07-15 16:17:46.417864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:66136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.246 [2024-07-15 16:17:46.417869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:25.246 [2024-07-15 16:17:46.417879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:66144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.246 [2024-07-15 16:17:46.417884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:25.246 [2024-07-15 16:17:46.417894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:66152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.246 [2024-07-15 16:17:46.417899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:25.246 [2024-07-15 16:17:46.417909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:66160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.246 [2024-07-15 16:17:46.417914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:25.246 [2024-07-15 16:17:46.417924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:66168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.246 [2024-07-15 16:17:46.417929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:25.246 [2024-07-15 16:17:46.418271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.246 [2024-07-15 16:17:46.418278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:25.246 [2024-07-15 16:17:46.418290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:66184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.246 [2024-07-15 16:17:46.418296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:25.246 [2024-07-15 16:17:46.418306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:66192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.246 [2024-07-15 16:17:46.418311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:25.246 [2024-07-15 16:17:46.418321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:65176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.246 [2024-07-15 16:17:46.418326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:25.246 [2024-07-15 16:17:46.418336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.246 [2024-07-15 16:17:46.418341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:25.246 [2024-07-15 16:17:46.418351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:65248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.246 [2024-07-15 16:17:46.418356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:25.246 [2024-07-15 16:17:46.418366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:65256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.246 [2024-07-15 16:17:46.418370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.246 [2024-07-15 16:17:46.418380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:65264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.246 [2024-07-15 16:17:46.418385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.246 [2024-07-15 16:17:46.418396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.246 [2024-07-15 16:17:46.418401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:25.246 [2024-07-15 16:17:46.418480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.246 [2024-07-15 16:17:46.418487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:25.246 [2024-07-15 16:17:46.418498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:65288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.246 [2024-07-15 16:17:46.418504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:25.246 [2024-07-15 16:17:46.418514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.246 [2024-07-15 16:17:46.418520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:25.246 [2024-07-15 16:17:46.418530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.246 [2024-07-15 16:17:46.418535] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:25.246 [2024-07-15 16:17:46.418547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:65312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.246 [2024-07-15 16:17:46.418552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:25.246 [2024-07-15 16:17:46.418563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.246 [2024-07-15 16:17:46.418569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:25.246 [2024-07-15 16:17:46.418579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.246 [2024-07-15 16:17:46.418584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:25.246 [2024-07-15 16:17:46.418594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.246 [2024-07-15 16:17:46.418599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:25.246 [2024-07-15 16:17:46.418694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:65344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.246 [2024-07-15 16:17:46.418701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:25.246 [2024-07-15 16:17:46.418712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:65352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.246 [2024-07-15 16:17:46.418716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:25.246 [2024-07-15 16:17:46.418726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:65360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.246 [2024-07-15 16:17:46.418731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:25.247 [2024-07-15 16:17:46.418741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.247 [2024-07-15 16:17:46.418746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:25.247 [2024-07-15 16:17:46.418756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:65376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.247 [2024-07-15 16:17:46.418761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:25.247 [2024-07-15 16:17:46.418771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:65384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:25.247 [2024-07-15 16:17:46.418776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:25.247 [2024-07-15 16:17:46.418786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:65392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.247 [2024-07-15 16:17:46.418791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:25.247 [2024-07-15 16:17:46.418801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.247 [2024-07-15 16:17:46.418806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:25.247 [2024-07-15 16:17:46.419149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:65408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.247 [2024-07-15 16:17:46.419158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:25.247 [2024-07-15 16:17:46.419168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.247 [2024-07-15 16:17:46.419173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:25.247 [2024-07-15 16:17:46.419183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:65424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.247 [2024-07-15 16:17:46.419189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:25.247 [2024-07-15 16:17:46.419199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.247 [2024-07-15 16:17:46.419203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:25.247 [2024-07-15 16:17:46.419213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.247 [2024-07-15 16:17:46.419218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:25.247 [2024-07-15 16:17:46.419228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.247 [2024-07-15 16:17:46.419233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:25.247 [2024-07-15 16:17:46.419243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.247 [2024-07-15 16:17:46.419248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:25.247 [2024-07-15 16:17:46.419258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 
lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.247 [2024-07-15 16:17:46.419263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:25.247 [2024-07-15 16:17:46.419364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.247 [2024-07-15 16:17:46.419371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:25.247 [2024-07-15 16:17:46.419381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.247 [2024-07-15 16:17:46.419386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:25.247 [2024-07-15 16:17:46.419397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:65488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.247 [2024-07-15 16:17:46.419401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:25.247 [2024-07-15 16:17:46.419412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.247 [2024-07-15 16:17:46.419416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:25.247 [2024-07-15 16:17:46.419427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.247 [2024-07-15 16:17:46.419433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:25.247 [2024-07-15 16:17:46.419443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:65512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.247 [2024-07-15 16:17:46.419448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.247 [2024-07-15 16:17:46.419459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:65520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.247 [2024-07-15 16:17:46.419465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.247 [2024-07-15 16:17:46.419475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:65528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.247 [2024-07-15 16:17:46.419480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:25.247 [2024-07-15 16:17:46.419769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:65536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.247 [2024-07-15 16:17:46.419776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:25.247 [2024-07-15 16:17:46.419787] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:65544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.247 [2024-07-15 16:17:46.419792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:25.247 [2024-07-15 16:17:46.419803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:65552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.247 [2024-07-15 16:17:46.419808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:25.247 [2024-07-15 16:17:46.419818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:65560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.247 [2024-07-15 16:17:46.419822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:25.247 [2024-07-15 16:17:46.419832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:65568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.247 [2024-07-15 16:17:46.419837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:25.247 [2024-07-15 16:17:46.419847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:65576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.247 [2024-07-15 16:17:46.419852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:25.247 [2024-07-15 16:17:46.419862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:65584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.247 [2024-07-15 16:17:46.419867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:25.247 [2024-07-15 16:17:46.419878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:65592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.247 [2024-07-15 16:17:46.419883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:25.247 [2024-07-15 16:17:46.419980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:65600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.247 [2024-07-15 16:17:46.419986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:25.247 [2024-07-15 16:17:46.419999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.247 [2024-07-15 16:17:46.420004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:25.247 [2024-07-15 16:17:46.420014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:65616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.247 [2024-07-15 16:17:46.420019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006e p:0 m:0 dnr:0 
00:26:25.248 [2024-07-15 16:17:46.420029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:65624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.248 [2024-07-15 16:17:46.420033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:25.248 [2024-07-15 16:17:46.420044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:65632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.248 [2024-07-15 16:17:46.420048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:25.248 [2024-07-15 16:17:46.420058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:65640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.248 [2024-07-15 16:17:46.420063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:25.248 [2024-07-15 16:17:46.420073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:65648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.248 [2024-07-15 16:17:46.420078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:25.248 [2024-07-15 16:17:46.420088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:65656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.248 [2024-07-15 16:17:46.420093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:25.248 [2024-07-15 16:17:46.420520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:65664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.248 [2024-07-15 16:17:46.420527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:25.248 [2024-07-15 16:17:46.420538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:65672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.248 [2024-07-15 16:17:46.420543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:25.248 [2024-07-15 16:17:46.420553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:65680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.248 [2024-07-15 16:17:46.420558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:25.248 [2024-07-15 16:17:46.420567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:65688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.248 [2024-07-15 16:17:46.420572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:25.248 [2024-07-15 16:17:46.420582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:65696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.248 [2024-07-15 16:17:46.420588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:100 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:25.248 [2024-07-15 16:17:46.420600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.248 [2024-07-15 16:17:46.420606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:25.248 [2024-07-15 16:17:46.420616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:65184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.248 [2024-07-15 16:17:46.420620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:25.248 [2024-07-15 16:17:46.420631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:65192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.248 [2024-07-15 16:17:46.420635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:25.248 [2024-07-15 16:17:46.420646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:65200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.248 [2024-07-15 16:17:46.420650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:25.248 [2024-07-15 16:17:46.420660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:65208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.248 [2024-07-15 16:17:46.420665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:25.248 [2024-07-15 16:17:46.420676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:65216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.248 [2024-07-15 16:17:46.420681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:25.248 [2024-07-15 16:17:46.420691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:65224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.248 [2024-07-15 16:17:46.420696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:25.248 [2024-07-15 16:17:46.420706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:65232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.248 [2024-07-15 16:17:46.420711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.248 [2024-07-15 16:17:46.420721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:65712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.248 [2024-07-15 16:17:46.420726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.248 [2024-07-15 16:17:46.420738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:65720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.248 [2024-07-15 16:17:46.420743] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.248 [2024-07-15 16:17:46.422468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.248 [2024-07-15 16:17:46.422477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:25.248 [2024-07-15 16:17:46.422495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.248 [2024-07-15 16:17:46.422501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:25.248 [2024-07-15 16:17:46.422511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:65744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.248 [2024-07-15 16:17:46.422518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:25.248 [2024-07-15 16:17:46.422528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:65752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.248 [2024-07-15 16:17:46.422533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:25.248 [2024-07-15 16:17:46.422543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:65760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.248 [2024-07-15 16:17:46.422548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:25.248 [2024-07-15 16:17:46.422558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:65768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.248 [2024-07-15 16:17:46.422563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:25.248 [2024-07-15 16:17:46.422573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:65776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.248 [2024-07-15 16:17:46.422577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:25.248 [2024-07-15 16:17:46.422587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:65784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.248 [2024-07-15 16:17:46.422592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:25.248 [2024-07-15 16:17:46.422602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:65792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.248 [2024-07-15 16:17:46.422607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:25.248 [2024-07-15 16:17:46.422617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:65800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.248 
[2024-07-15 16:17:46.422622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:25.248 [2024-07-15 16:17:46.422631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:65808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.248 [2024-07-15 16:17:46.422636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:25.248 [2024-07-15 16:17:46.422646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:65816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.248 [2024-07-15 16:17:46.422651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:25.248 [2024-07-15 16:17:46.422661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:65824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.248 [2024-07-15 16:17:46.422666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:25.248 [2024-07-15 16:17:46.422676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:65832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.248 [2024-07-15 16:17:46.422681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:25.248 [2024-07-15 16:17:46.422691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:65840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.248 [2024-07-15 16:17:46.422697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:25.248 [2024-07-15 16:17:46.422707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:65848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.248 [2024-07-15 16:17:46.422711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:25.248 [2024-07-15 16:17:46.422722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.248 [2024-07-15 16:17:46.422727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:25.248 [2024-07-15 16:17:46.422776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:65864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.248 [2024-07-15 16:17:46.422783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:25.248 [2024-07-15 16:17:46.422795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:65872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.248 [2024-07-15 16:17:46.422800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:25.249 [2024-07-15 16:17:46.422811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:65880 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.249 [2024-07-15 16:17:46.422816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:25.249 [2024-07-15 16:17:46.422827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:65888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.249 [2024-07-15 16:17:46.422832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:25.249 [2024-07-15 16:17:46.422843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:65896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.249 [2024-07-15 16:17:46.422848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:25.249 [2024-07-15 16:17:46.422859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:65904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.249 [2024-07-15 16:17:46.422864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:25.249 [2024-07-15 16:17:46.422875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:65912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.249 [2024-07-15 16:17:46.422880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:25.249 [2024-07-15 16:17:46.422891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:65920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.249 [2024-07-15 16:17:46.422896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:25.249 [2024-07-15 16:17:46.422907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:65928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.249 [2024-07-15 16:17:46.422912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:25.249 [2024-07-15 16:17:46.422923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:65936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.249 [2024-07-15 16:17:46.422928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:25.249 [2024-07-15 16:17:46.422943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:65944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.249 [2024-07-15 16:17:46.422948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:25.249 [2024-07-15 16:17:46.422959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:65952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.249 [2024-07-15 16:17:46.422964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:25.249 [2024-07-15 16:17:46.422975] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.249 [2024-07-15 16:17:46.422980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:25.249 [2024-07-15 16:17:46.422991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:65968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.249 [2024-07-15 16:17:46.422996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.249 [2024-07-15 16:17:46.423007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.249 [2024-07-15 16:17:46.423012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.249 [2024-07-15 16:17:46.423023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.249 [2024-07-15 16:17:46.423028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:25.249 [2024-07-15 16:17:46.423039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:65992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.249 [2024-07-15 16:17:46.423044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:25.249 [2024-07-15 16:17:46.423055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:66000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.249 [2024-07-15 16:17:46.423060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:25.249 [2024-07-15 16:17:46.423071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:66008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.249 [2024-07-15 16:17:46.423076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:25.249 [2024-07-15 16:17:46.423087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:66016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.249 [2024-07-15 16:17:46.423092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:25.249 [2024-07-15 16:17:46.423104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:66024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.249 [2024-07-15 16:17:46.423109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:25.249 [2024-07-15 16:17:46.423120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:66032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.249 [2024-07-15 16:17:46.423129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:25.249 [2024-07-15 
16:17:46.423141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:66040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.249 [2024-07-15 16:17:46.423146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:25.249 [2024-07-15 16:17:46.423157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:66048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.249 [2024-07-15 16:17:46.423162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:25.249 [2024-07-15 16:17:46.423173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:66056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.249 [2024-07-15 16:17:46.423178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:25.249 [2024-07-15 16:17:46.423189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:66064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.249 [2024-07-15 16:17:46.423194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:25.249 [2024-07-15 16:17:46.423205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:66072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.249 [2024-07-15 16:17:46.423210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:25.249 [2024-07-15 16:17:46.423221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:66080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.249 [2024-07-15 16:17:46.423225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:25.249 [2024-07-15 16:17:46.423236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:66088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.249 [2024-07-15 16:17:46.423241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:25.249 [2024-07-15 16:17:46.423252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:66096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.249 [2024-07-15 16:17:46.423257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:25.249 [2024-07-15 16:17:46.423268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:66104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.249 [2024-07-15 16:17:46.423273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:25.249 [2024-07-15 16:17:46.423284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:66112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.249 [2024-07-15 16:17:46.423289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 
sqhd:0033 p:0 m:0 dnr:0 00:26:25.249 [2024-07-15 16:17:46.423300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:66120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.249 [2024-07-15 16:17:46.423305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:25.249 [2024-07-15 16:17:46.423316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:66128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.249 [2024-07-15 16:17:46.423320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:25.249 [2024-07-15 16:17:46.423331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:66136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.249 [2024-07-15 16:17:46.423338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:25.249 [2024-07-15 16:17:46.423349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:66144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.249 [2024-07-15 16:17:46.423354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:25.249 [2024-07-15 16:17:46.423365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:66152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.249 [2024-07-15 16:17:46.423370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:25.249 [2024-07-15 16:17:46.423381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:66160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.249 [2024-07-15 16:17:46.423386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:25.249 [2024-07-15 16:17:46.423397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:66168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.249 [2024-07-15 16:17:46.423402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:25.249 [2024-07-15 16:17:46.423413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:66176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.249 [2024-07-15 16:17:46.423417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:25.249 [2024-07-15 16:17:46.423429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:66184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.249 [2024-07-15 16:17:46.423433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:25.249 [2024-07-15 16:17:46.423445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:66192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.249 [2024-07-15 16:17:46.423449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:25.249 [2024-07-15 16:17:46.423460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:65176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.249 [2024-07-15 16:17:46.423465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:25.249 [2024-07-15 16:17:46.423476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.249 [2024-07-15 16:17:46.423481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:25.250 [2024-07-15 16:17:46.423492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:65248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.250 [2024-07-15 16:17:46.423497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:25.250 [2024-07-15 16:17:46.423508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:65256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.250 [2024-07-15 16:17:46.423513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.250 [2024-07-15 16:17:46.423524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:65264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.250 [2024-07-15 16:17:46.423530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.250 [2024-07-15 16:17:46.423614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.250 [2024-07-15 16:17:46.423621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:25.250 [2024-07-15 16:17:46.423635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.250 [2024-07-15 16:17:46.423640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:25.250 [2024-07-15 16:17:46.423654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.250 [2024-07-15 16:17:46.423659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:25.250 [2024-07-15 16:17:46.423673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.250 [2024-07-15 16:17:46.423677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:25.250 [2024-07-15 16:17:46.423691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.250 [2024-07-15 16:17:46.423696] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:26:25.250 [2024-07-15 16:17:46.423710 through 16:17:58.527384: several hundred further nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* pairs condensed here - every outstanding WRITE (SGL DATA BLOCK OFFSET) and READ (SGL TRANSPORT DATA BLOCK) on qid:1, len:8, lba values in the 24xxx and 65xxx ranges, completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0 while the active path was inaccessible]
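This flood of (03/02) completions is the failover window the multipath status test exercises: while the preferred path's ANA group reports itself inaccessible, every queued command on that path completes with ASYMMETRIC ACCESS INACCESSIBLE and bdev_nvme re-routes I/O to the remaining path. For watching the host-side controller state while reproducing this, a minimal sketch follows; rpc.py and bdev_nvme_get_controllers are stock SPDK tooling, but the HOST_SOCK path and the one-second polling loop are assumptions, not part of the test itself.

    # Poll the host-side bdev_nvme driver for its controller list while the
    # failover is in progress.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOST_SOCK=/var/tmp/bdevperf.sock   # assumption: adjust to the -r socket the host app was started with
    while sleep 1; do
        "$RPC" -s "$HOST_SOCK" bdev_nvme_get_controllers | jq .
    done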
00:26:25.252 Received shutdown signal, test time was about 25.743376 seconds
00:26:25.252
00:26:25.252                                                                             Latency(us)
00:26:25.252 Device Information                          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s     Average        min          max
00:26:25.252 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:26:25.252 Verification LBA range: start 0x0 length 0x4000
00:26:25.252 	Nvme0n1                             :      25.74    11125.30      43.46      0.00     0.00    11487.01     402.77   3075822.93
00:26:25.252 ===================================================================================================================
00:26:25.252 	Total                               :               11125.30      43.46      0.00     0.00    11487.01     402.77   3075822.93
00:26:25.252 16:18:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:25.252 16:18:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:26:25.252 16:18:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:26:25.252 16:18:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:26:25.252 16:18:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:26:25.252 16:18:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:26:25.252 16:18:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:26:25.252 16:18:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:26:25.252 16:18:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:26:25.252 16:18:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:26:25.252 rmmod nvme_tcp
00:26:25.252 rmmod nvme_fabrics
00:26:25.252 rmmod nvme_keyring
00:26:25.252 16:18:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:26:25.252 16:18:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:26:25.252 16:18:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
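The cleanup recorded above can also be reproduced by hand. A short sketch of the same sequence, reusing the rpc.py path and subsystem NQN from the log; the only assumption is that the target answers on rpc.py's default socket (/var/tmp/spdk.sock), which is why no -s option appears, matching the record above.

    # Tear down the test subsystem on the running target, then unload the
    # kernel initiator modules - the same steps multipath_status.sh@143-148
    # and nvmftestfini perform in the records above.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$RPC" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    sync
    modprobe -v -r nvme-tcp        # also drops nvme_fabrics and nvme_keyring, as the rmmod output shows
    modprobe -v -r nvme-fabrics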
00:26:25.252 16:18:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2416932 ']' 00:26:25.252 16:18:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2416932 00:26:25.252 16:18:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2416932 ']' 00:26:25.252 16:18:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2416932 00:26:25.252 16:18:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:26:25.252 16:18:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:25.252 16:18:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2416932 00:26:25.513 16:18:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:25.513 16:18:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:25.513 16:18:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2416932' 00:26:25.513 killing process with pid 2416932 00:26:25.513 16:18:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2416932 00:26:25.513 16:18:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2416932 00:26:25.513 16:18:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:25.513 16:18:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:25.513 16:18:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:25.513 16:18:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:25.513 16:18:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:25.513 16:18:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:25.513 16:18:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:25.513 16:18:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:28.056 16:18:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:28.056 00:26:28.056 real 0m39.144s 00:26:28.056 user 1m41.101s 00:26:28.056 sys 0m10.730s 00:26:28.056 16:18:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:28.056 16:18:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:28.056 ************************************ 00:26:28.056 END TEST nvmf_host_multipath_status 00:26:28.056 ************************************ 00:26:28.056 16:18:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:28.057 16:18:03 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:28.057 16:18:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:28.057 16:18:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:28.057 16:18:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:28.057 ************************************ 00:26:28.057 START TEST nvmf_discovery_remove_ifc 00:26:28.057 
************************************ 00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:28.057 * Looking for test storage... 00:26:28.057 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[these three toolchain directories repeated several more times]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[same PATH value as above with the go directory prepended once more]
00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[same PATH value as above with the protoc directory prepended once more]
00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH
00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo [the exported PATH value shown above]
00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0
00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0
00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc
-- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:26:28.057 16:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:34.692 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:34.692 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:26:34.692 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:34.692 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:34.692 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:34.692 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:34.692 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:34.692 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:26:34.692 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:34.692 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:26:34.692 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:26:34.692 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:26:34.692 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:26:34.692 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:26:34.692 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:26:34.692 16:18:10 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:34.692 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:34.692 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:34.692 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:34.692 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:34.692 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:34.692 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:34.692 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:34.692 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:34.692 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:34.692 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:34.692 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:34.693 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:34.693 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 
]] 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:34.693 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:34.693 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:34.693 
16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:34.693 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:34.954 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:34.954 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:34.954 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:34.954 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:34.954 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:34.954 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:34.954 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:34.954 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:34.954 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.464 ms 00:26:34.954 00:26:34.954 --- 10.0.0.2 ping statistics --- 00:26:34.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:34.954 rtt min/avg/max/mdev = 0.464/0.464/0.464/0.000 ms 00:26:34.954 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:34.954 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:34.954 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.335 ms 00:26:34.954 00:26:34.954 --- 10.0.0.1 ping statistics --- 00:26:34.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:34.954 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:26:34.954 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:34.954 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:26:34.954 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:34.954 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:34.954 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:34.954 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:34.954 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:34.954 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:34.954 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:34.954 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:34.954 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:34.954 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:34.954 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:34.954 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=2427569 00:26:34.954 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 2427569 00:26:34.954 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:34.954 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 2427569 ']' 00:26:34.954 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:34.954 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:34.954 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:34.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:34.954 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:34.954 16:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:35.216 [2024-07-15 16:18:10.838216] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
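Everything from nvmf/common.sh@236 down to the nvmf_tgt launch above is the standard phy test-bed setup: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, the initiator keeps cvl_0_1 with 10.0.0.1/24, one ping in each direction proves connectivity, and the target is then started inside the namespace. A condensed sketch of that sequence using the names and addresses from the log; backgrounding the target and the socket-wait loop at the end are illustrative stand-ins for nvmfappstart/waitforlisten, not the script's own code.

    # Build the two-interface test bed and start the target inside the netns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done  # stand-in for waitforlisten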
00:26:35.216 [2024-07-15 16:18:10.838280] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:35.216 EAL: No free 2048 kB hugepages reported on node 1 00:26:35.216 [2024-07-15 16:18:10.927757] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:35.216 [2024-07-15 16:18:11.020963] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:35.216 [2024-07-15 16:18:11.021018] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:35.216 [2024-07-15 16:18:11.021026] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:35.216 [2024-07-15 16:18:11.021033] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:35.216 [2024-07-15 16:18:11.021039] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:35.216 [2024-07-15 16:18:11.021065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:35.788 16:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:35.788 16:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:26:35.788 16:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:36.048 16:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:36.048 16:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:36.048 16:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:36.048 16:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:36.048 16:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.048 16:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:36.048 [2024-07-15 16:18:11.686078] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:36.048 [2024-07-15 16:18:11.694292] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:36.048 null0 00:26:36.048 [2024-07-15 16:18:11.726282] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:36.048 16:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.048 16:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2427743 00:26:36.048 16:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2427743 /tmp/host.sock 00:26:36.048 16:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:36.048 16:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 2427743 ']' 00:26:36.048 16:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:26:36.048 16:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:26:36.048 16:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:36.048 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:36.048 16:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:36.048 16:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:36.048 [2024-07-15 16:18:11.808596] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:26:36.048 [2024-07-15 16:18:11.808658] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2427743 ] 00:26:36.048 EAL: No free 2048 kB hugepages reported on node 1 00:26:36.048 [2024-07-15 16:18:11.871860] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:36.309 [2024-07-15 16:18:11.945939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:36.881 16:18:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:36.881 16:18:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:26:36.881 16:18:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:36.881 16:18:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:36.881 16:18:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.881 16:18:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:36.881 16:18:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.881 16:18:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:36.881 16:18:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.881 16:18:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:36.881 16:18:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.881 16:18:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:36.881 16:18:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.881 16:18:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:38.265 [2024-07-15 16:18:13.706339] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:38.265 [2024-07-15 16:18:13.706362] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:38.265 [2024-07-15 16:18:13.706378] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:38.265 [2024-07-15 16:18:13.794667] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:38.265 [2024-07-15 16:18:13.896413] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:38.265 [2024-07-15 16:18:13.896466] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:38.265 [2024-07-15 16:18:13.896489] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:38.265 [2024-07-15 16:18:13.896503] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:38.265 [2024-07-15 16:18:13.896523] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:38.265 16:18:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.265 16:18:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:38.265 16:18:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:38.265 16:18:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:38.265 16:18:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:38.265 16:18:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.265 16:18:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:38.265 16:18:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:38.265 16:18:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:38.265 [2024-07-15 16:18:13.904429] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x12997b0 was disconnected and freed. delete nvme_qpair. 
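The host-side attach traced above is driven entirely over the /tmp/host.sock RPC socket: rpc_cmd is a thin wrapper around SPDK's scripts/rpc.py, so the bdev_nvme_start_discovery call that produced subsystem nvme0 and the nvme0n1 bdev is roughly equivalent to the hand-run sketch below (the rpc.py path is an assumption; every flag is copied verbatim from the trace).

    # Sketch only: start a discovery service against the target's discovery
    # subsystem on 10.0.0.2:8009 and block until the discovered NVM subsystem
    # is attached. The three timeout knobs are what make the controller drop
    # out quickly once the interface is removed later in the test.
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach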
00:26:38.265 16:18:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.265 16:18:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:38.265 16:18:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:38.265 16:18:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:38.265 16:18:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:38.265 16:18:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:38.265 16:18:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:38.265 16:18:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:38.265 16:18:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.265 16:18:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:38.265 16:18:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:38.265 16:18:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:38.265 16:18:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.524 16:18:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:38.524 16:18:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:39.466 16:18:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:39.466 16:18:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:39.466 16:18:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:39.466 16:18:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.466 16:18:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:39.466 16:18:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:39.466 16:18:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:39.466 16:18:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.466 16:18:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:39.466 16:18:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:40.407 16:18:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:40.407 16:18:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:40.407 16:18:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:40.407 16:18:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.407 16:18:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:40.407 16:18:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # 
set +x 00:26:40.407 16:18:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:40.407 16:18:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.407 16:18:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:40.407 16:18:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:41.786 16:18:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:41.786 16:18:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:41.786 16:18:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:41.786 16:18:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:41.786 16:18:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.786 16:18:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:41.786 16:18:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:41.786 16:18:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.786 16:18:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:41.786 16:18:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:42.727 16:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:42.727 16:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:42.727 16:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:42.727 16:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.727 16:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:42.727 16:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:42.727 16:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:42.727 16:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.727 16:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:42.727 16:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:43.667 [2024-07-15 16:18:19.336781] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:43.667 [2024-07-15 16:18:19.336821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:43.667 [2024-07-15 16:18:19.336833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.667 [2024-07-15 16:18:19.336843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:43.667 [2024-07-15 16:18:19.336850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:43.667 [2024-07-15 16:18:19.336858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:43.667 [2024-07-15 16:18:19.336870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.668 [2024-07-15 16:18:19.336878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:43.668 [2024-07-15 16:18:19.336885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.668 [2024-07-15 16:18:19.336893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:43.668 [2024-07-15 16:18:19.336900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.668 [2024-07-15 16:18:19.336907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1260040 is same with the state(5) to be set 00:26:43.668 [2024-07-15 16:18:19.346802] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1260040 (9): Bad file descriptor 00:26:43.668 [2024-07-15 16:18:19.356841] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:43.668 16:18:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:43.668 16:18:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:43.668 16:18:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:43.668 16:18:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.668 16:18:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:43.668 16:18:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:43.668 16:18:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:44.610 [2024-07-15 16:18:20.361151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:44.610 [2024-07-15 16:18:20.361197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1260040 with addr=10.0.0.2, port=4420 00:26:44.610 [2024-07-15 16:18:20.361210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1260040 is same with the state(5) to be set 00:26:44.610 [2024-07-15 16:18:20.361239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1260040 (9): Bad file descriptor 00:26:44.610 [2024-07-15 16:18:20.361620] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:44.610 [2024-07-15 16:18:20.361639] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:44.610 [2024-07-15 16:18:20.361646] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:44.610 [2024-07-15 16:18:20.361655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
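The get_bdev_list / sleep cycles interleaved with these errors are a plain one-second poll of the host's bdev table, built from exactly the pipeline traced at each iteration. A minimal stand-alone sketch of the same idea (not the test's actual helper in host/discovery_remove_ifc.sh):

    # Flatten the host app's current bdev names to a single line, then poll
    # once per second until the list matches the expected value -- "" while
    # waiting for nvme0n1 to be deleted, "nvme1n1" after the re-attach below.
    get_bdev_list() {
        ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        while [[ "$(get_bdev_list)" != "$1" ]]; do sleep 1; done
    }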
00:26:44.610 [2024-07-15 16:18:20.361674] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.610 [2024-07-15 16:18:20.361683] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:44.610 16:18:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.610 16:18:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:44.610 16:18:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:45.549 [2024-07-15 16:18:21.364060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:45.549 [2024-07-15 16:18:21.364081] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:45.549 [2024-07-15 16:18:21.364088] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:45.549 [2024-07-15 16:18:21.364103] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:26:45.549 [2024-07-15 16:18:21.364116] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:45.549 [2024-07-15 16:18:21.364137] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:45.549 [2024-07-15 16:18:21.364159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:45.549 [2024-07-15 16:18:21.364170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.549 [2024-07-15 16:18:21.364180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:45.549 [2024-07-15 16:18:21.364187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.549 [2024-07-15 16:18:21.364195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:45.549 [2024-07-15 16:18:21.364202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.549 [2024-07-15 16:18:21.364211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:45.549 [2024-07-15 16:18:21.364218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.549 [2024-07-15 16:18:21.364226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:45.549 [2024-07-15 16:18:21.364233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.549 [2024-07-15 16:18:21.364240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
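The timing of the failure above follows directly from the discovery options: connect() to 10.0.0.2:4420 fails with errno 110 (the "Connection timed out" already reported by spdk_sock_recv once the address was removed), --reconnect-delay-sec 1 retries roughly once per second, and --ctrlr-loss-timeout-sec 2 gives up about two seconds after the first failure, deleting the controller, its nvme0n1 bdev, and the discovery entry, which is why the next get_bdev_list pass below finally returns an empty list. As a hedged aside (option names assumed to match current rpc.py), the same failure-handling defaults could also be applied globally rather than per discovery service:

    # Sketch: set the reconnect/ctrlr-loss behaviour used by this test as
    # bdev_nvme-wide defaults, before any controllers are created.
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1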
00:26:45.549 [2024-07-15 16:18:21.364607] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x125f4c0 (9): Bad file descriptor 00:26:45.549 [2024-07-15 16:18:21.365618] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:45.549 [2024-07-15 16:18:21.365628] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:26:45.808 16:18:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:45.808 16:18:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:45.808 16:18:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:45.808 16:18:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.808 16:18:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:45.808 16:18:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:45.808 16:18:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:45.808 16:18:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.808 16:18:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:45.808 16:18:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:45.808 16:18:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:45.808 16:18:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:45.808 16:18:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:45.808 16:18:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:45.808 16:18:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:45.808 16:18:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.808 16:18:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:45.808 16:18:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:45.808 16:18:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:45.808 16:18:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.809 16:18:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:45.809 16:18:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:47.191 16:18:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:47.191 16:18:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:47.191 16:18:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:47.191 16:18:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.191 16:18:22 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # sort 00:26:47.191 16:18:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:47.191 16:18:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:47.191 16:18:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.191 16:18:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:47.191 16:18:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:47.767 [2024-07-15 16:18:23.420290] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:47.767 [2024-07-15 16:18:23.420307] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:47.767 [2024-07-15 16:18:23.420320] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:47.767 [2024-07-15 16:18:23.547751] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:48.027 [2024-07-15 16:18:23.649569] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:48.027 [2024-07-15 16:18:23.649606] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:48.027 [2024-07-15 16:18:23.649627] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:48.027 [2024-07-15 16:18:23.649639] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:48.027 [2024-07-15 16:18:23.649647] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:48.027 16:18:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:48.027 16:18:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:48.027 16:18:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:48.027 16:18:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.027 16:18:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:48.027 16:18:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:48.027 16:18:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:48.027 [2024-07-15 16:18:23.658351] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1276310 was disconnected and freed. delete nvme_qpair. 
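Once cvl_0_0 gets its address back, the still-running discovery poller sees the subsystem again and attaches it under a fresh controller name, which is why the bdev reappears as nvme1n1 rather than nvme0n1. For reference only (not part of this test script; the RPC name is assumed from the standard rpc.py), the re-attach could be confirmed from the host side with:

    # Illustration: list the NVMe controllers currently held by the host app;
    # after the re-attach this should show nvme1 connected to 10.0.0.2:4420.
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers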
00:26:48.027 16:18:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.027 16:18:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:48.027 16:18:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:48.027 16:18:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2427743 00:26:48.027 16:18:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 2427743 ']' 00:26:48.027 16:18:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 2427743 00:26:48.027 16:18:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:26:48.027 16:18:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:48.027 16:18:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2427743 00:26:48.027 16:18:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:48.027 16:18:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:48.027 16:18:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2427743' 00:26:48.027 killing process with pid 2427743 00:26:48.027 16:18:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 2427743 00:26:48.027 16:18:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 2427743 00:26:48.286 16:18:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:48.286 16:18:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:48.286 16:18:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:26:48.286 16:18:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:48.286 16:18:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:26:48.286 16:18:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:48.286 16:18:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:48.286 rmmod nvme_tcp 00:26:48.286 rmmod nvme_fabrics 00:26:48.286 rmmod nvme_keyring 00:26:48.286 16:18:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:48.287 16:18:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:26:48.287 16:18:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:26:48.287 16:18:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 2427569 ']' 00:26:48.287 16:18:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 2427569 00:26:48.287 16:18:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 2427569 ']' 00:26:48.287 16:18:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 2427569 00:26:48.287 16:18:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:26:48.287 16:18:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:48.287 16:18:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2427569 
00:26:48.287 16:18:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:48.287 16:18:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:48.287 16:18:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2427569' 00:26:48.287 killing process with pid 2427569 00:26:48.287 16:18:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 2427569 00:26:48.287 16:18:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 2427569 00:26:48.287 16:18:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:48.287 16:18:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:48.287 16:18:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:48.287 16:18:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:48.547 16:18:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:48.547 16:18:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:48.547 16:18:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:48.547 16:18:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:50.457 16:18:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:50.457 00:26:50.457 real 0m22.821s 00:26:50.457 user 0m26.962s 00:26:50.457 sys 0m6.693s 00:26:50.457 16:18:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:50.457 16:18:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:50.457 ************************************ 00:26:50.457 END TEST nvmf_discovery_remove_ifc 00:26:50.457 ************************************ 00:26:50.457 16:18:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:50.457 16:18:26 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:50.457 16:18:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:50.457 16:18:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:50.457 16:18:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:50.458 ************************************ 00:26:50.458 START TEST nvmf_identify_kernel_target 00:26:50.458 ************************************ 00:26:50.458 16:18:26 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:50.719 * Looking for test storage... 
00:26:50.719 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:50.719 16:18:26 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:50.719 16:18:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:50.719 16:18:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:50.719 16:18:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:50.719 16:18:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:50.719 16:18:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:50.719 16:18:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:50.719 16:18:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:50.719 16:18:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:50.719 16:18:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:50.719 16:18:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:50.719 16:18:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:50.719 16:18:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:50.719 16:18:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:50.719 16:18:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:50.719 16:18:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:50.719 16:18:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:50.719 16:18:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:50.719 16:18:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:50.719 16:18:26 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:50.719 16:18:26 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:50.719 16:18:26 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:50.719 16:18:26 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.719 16:18:26 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.719 16:18:26 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.719 16:18:26 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:50.719 16:18:26 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.719 16:18:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:26:50.719 16:18:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:50.719 16:18:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:50.719 16:18:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:50.719 16:18:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:50.719 16:18:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:50.719 16:18:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:50.719 16:18:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:50.719 16:18:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:50.719 16:18:26 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:50.719 16:18:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:50.719 16:18:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:50.719 16:18:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:50.719 16:18:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:50.719 16:18:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:50.719 16:18:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:26:50.719 16:18:26 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:50.720 16:18:26 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:50.720 16:18:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:50.720 16:18:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:50.720 16:18:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:26:50.720 16:18:26 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:57.369 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:57.369 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:26:57.369 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:57.369 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:57.369 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:57.369 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:57.369 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:57.369 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:26:57.369 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:57.369 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:26:57.369 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:26:57.369 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:26:57.369 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:26:57.369 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:26:57.369 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:57.370 
16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:57.370 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:57.370 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
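The pci_net_devs glob above, echoed as the "Found net devices under ..." lines that follow, is the usual sysfs hop from a PCI function to its kernel netdev name. Done by hand for the two e810 ports detected in this run (addresses taken from the "Found 0000:4b:00.x" lines above), it amounts to:

    # Sketch: resolve each detected PCI function to its net device via sysfs,
    # which is where the cvl_0_0 and cvl_0_1 names come from.
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/)"
    done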
00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:57.370 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:57.370 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:57.370 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:57.630 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip 
addr add 10.0.0.1/24 dev cvl_0_1 00:26:57.630 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:57.630 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:57.630 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:57.630 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:57.630 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:57.630 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:57.630 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:57.630 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.450 ms 00:26:57.630 00:26:57.630 --- 10.0.0.2 ping statistics --- 00:26:57.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:57.630 rtt min/avg/max/mdev = 0.450/0.450/0.450/0.000 ms 00:26:57.630 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:57.630 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:57.630 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.373 ms 00:26:57.630 00:26:57.630 --- 10.0.0.1 ping statistics --- 00:26:57.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:57.630 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:26:57.630 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:57.630 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:26:57.630 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:57.630 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:57.630 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:57.630 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:57.630 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:57.630 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:57.630 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:57.890 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:57.890 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:57.890 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:26:57.890 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:57.890 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:57.890 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.890 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.890 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:57.891 16:18:33 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.891 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:57.891 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:57.891 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:57.891 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:57.891 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:57.891 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:57.891 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:57.891 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:57.891 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:57.891 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:57.891 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:26:57.891 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:26:57.891 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:57.891 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:57.891 16:18:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:01.191 Waiting for block devices as requested 00:27:01.191 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:01.191 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:01.452 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:01.452 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:01.452 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:01.452 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:01.711 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:01.711 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:01.711 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:01.979 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:01.979 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:02.240 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:02.240 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:02.240 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:02.240 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:02.499 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:02.499 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:02.759 16:18:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:02.759 16:18:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:02.759 16:18:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:02.759 16:18:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:02.759 
16:18:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:02.759 16:18:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:02.759 16:18:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:02.759 16:18:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:02.760 16:18:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:02.760 No valid GPT data, bailing 00:27:02.760 16:18:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:02.760 16:18:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:27:02.760 16:18:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:27:02.760 16:18:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:02.760 16:18:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:02.760 16:18:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:02.760 16:18:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:02.760 16:18:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:02.760 16:18:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:02.760 16:18:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:27:02.760 16:18:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:02.760 16:18:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:27:02.760 16:18:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:02.760 16:18:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:27:02.760 16:18:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:27:02.760 16:18:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:27:02.760 16:18:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:02.760 16:18:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:27:03.021 00:27:03.021 Discovery Log Number of Records 2, Generation counter 2 00:27:03.021 =====Discovery Log Entry 0====== 00:27:03.021 trtype: tcp 00:27:03.021 adrfam: ipv4 00:27:03.021 subtype: current discovery subsystem 00:27:03.021 treq: not specified, sq flow control disable supported 00:27:03.021 portid: 1 00:27:03.021 trsvcid: 4420 00:27:03.021 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:03.021 traddr: 10.0.0.1 00:27:03.021 eflags: none 00:27:03.021 sectype: none 00:27:03.021 =====Discovery Log Entry 1====== 00:27:03.021 trtype: tcp 00:27:03.021 adrfam: ipv4 00:27:03.021 subtype: nvme subsystem 00:27:03.021 treq: not 
specified, sq flow control disable supported 00:27:03.021 portid: 1 00:27:03.021 trsvcid: 4420 00:27:03.021 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:03.021 traddr: 10.0.0.1 00:27:03.021 eflags: none 00:27:03.021 sectype: none 00:27:03.021 16:18:38 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:03.021 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:03.021 EAL: No free 2048 kB hugepages reported on node 1 00:27:03.021 ===================================================== 00:27:03.021 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:03.021 ===================================================== 00:27:03.021 Controller Capabilities/Features 00:27:03.021 ================================ 00:27:03.021 Vendor ID: 0000 00:27:03.021 Subsystem Vendor ID: 0000 00:27:03.021 Serial Number: 0942d6e08a36d536603d 00:27:03.021 Model Number: Linux 00:27:03.021 Firmware Version: 6.7.0-68 00:27:03.021 Recommended Arb Burst: 0 00:27:03.021 IEEE OUI Identifier: 00 00 00 00:27:03.021 Multi-path I/O 00:27:03.021 May have multiple subsystem ports: No 00:27:03.021 May have multiple controllers: No 00:27:03.021 Associated with SR-IOV VF: No 00:27:03.021 Max Data Transfer Size: Unlimited 00:27:03.021 Max Number of Namespaces: 0 00:27:03.021 Max Number of I/O Queues: 1024 00:27:03.021 NVMe Specification Version (VS): 1.3 00:27:03.021 NVMe Specification Version (Identify): 1.3 00:27:03.021 Maximum Queue Entries: 1024 00:27:03.021 Contiguous Queues Required: No 00:27:03.021 Arbitration Mechanisms Supported 00:27:03.021 Weighted Round Robin: Not Supported 00:27:03.021 Vendor Specific: Not Supported 00:27:03.021 Reset Timeout: 7500 ms 00:27:03.021 Doorbell Stride: 4 bytes 00:27:03.021 NVM Subsystem Reset: Not Supported 00:27:03.021 Command Sets Supported 00:27:03.021 NVM Command Set: Supported 00:27:03.021 Boot Partition: Not Supported 00:27:03.021 Memory Page Size Minimum: 4096 bytes 00:27:03.021 Memory Page Size Maximum: 4096 bytes 00:27:03.021 Persistent Memory Region: Not Supported 00:27:03.021 Optional Asynchronous Events Supported 00:27:03.021 Namespace Attribute Notices: Not Supported 00:27:03.021 Firmware Activation Notices: Not Supported 00:27:03.021 ANA Change Notices: Not Supported 00:27:03.021 PLE Aggregate Log Change Notices: Not Supported 00:27:03.021 LBA Status Info Alert Notices: Not Supported 00:27:03.021 EGE Aggregate Log Change Notices: Not Supported 00:27:03.021 Normal NVM Subsystem Shutdown event: Not Supported 00:27:03.021 Zone Descriptor Change Notices: Not Supported 00:27:03.021 Discovery Log Change Notices: Supported 00:27:03.021 Controller Attributes 00:27:03.021 128-bit Host Identifier: Not Supported 00:27:03.021 Non-Operational Permissive Mode: Not Supported 00:27:03.021 NVM Sets: Not Supported 00:27:03.021 Read Recovery Levels: Not Supported 00:27:03.021 Endurance Groups: Not Supported 00:27:03.021 Predictable Latency Mode: Not Supported 00:27:03.021 Traffic Based Keep ALive: Not Supported 00:27:03.021 Namespace Granularity: Not Supported 00:27:03.021 SQ Associations: Not Supported 00:27:03.021 UUID List: Not Supported 00:27:03.021 Multi-Domain Subsystem: Not Supported 00:27:03.021 Fixed Capacity Management: Not Supported 00:27:03.021 Variable Capacity Management: Not Supported 00:27:03.021 Delete Endurance Group: Not Supported 00:27:03.021 Delete NVM Set: Not Supported 00:27:03.021 
Extended LBA Formats Supported: Not Supported 00:27:03.021 Flexible Data Placement Supported: Not Supported 00:27:03.021 00:27:03.022 Controller Memory Buffer Support 00:27:03.022 ================================ 00:27:03.022 Supported: No 00:27:03.022 00:27:03.022 Persistent Memory Region Support 00:27:03.022 ================================ 00:27:03.022 Supported: No 00:27:03.022 00:27:03.022 Admin Command Set Attributes 00:27:03.022 ============================ 00:27:03.022 Security Send/Receive: Not Supported 00:27:03.022 Format NVM: Not Supported 00:27:03.022 Firmware Activate/Download: Not Supported 00:27:03.022 Namespace Management: Not Supported 00:27:03.022 Device Self-Test: Not Supported 00:27:03.022 Directives: Not Supported 00:27:03.022 NVMe-MI: Not Supported 00:27:03.022 Virtualization Management: Not Supported 00:27:03.022 Doorbell Buffer Config: Not Supported 00:27:03.022 Get LBA Status Capability: Not Supported 00:27:03.022 Command & Feature Lockdown Capability: Not Supported 00:27:03.022 Abort Command Limit: 1 00:27:03.022 Async Event Request Limit: 1 00:27:03.022 Number of Firmware Slots: N/A 00:27:03.022 Firmware Slot 1 Read-Only: N/A 00:27:03.022 Firmware Activation Without Reset: N/A 00:27:03.022 Multiple Update Detection Support: N/A 00:27:03.022 Firmware Update Granularity: No Information Provided 00:27:03.022 Per-Namespace SMART Log: No 00:27:03.022 Asymmetric Namespace Access Log Page: Not Supported 00:27:03.022 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:03.022 Command Effects Log Page: Not Supported 00:27:03.022 Get Log Page Extended Data: Supported 00:27:03.022 Telemetry Log Pages: Not Supported 00:27:03.022 Persistent Event Log Pages: Not Supported 00:27:03.022 Supported Log Pages Log Page: May Support 00:27:03.022 Commands Supported & Effects Log Page: Not Supported 00:27:03.022 Feature Identifiers & Effects Log Page:May Support 00:27:03.022 NVMe-MI Commands & Effects Log Page: May Support 00:27:03.022 Data Area 4 for Telemetry Log: Not Supported 00:27:03.022 Error Log Page Entries Supported: 1 00:27:03.022 Keep Alive: Not Supported 00:27:03.022 00:27:03.022 NVM Command Set Attributes 00:27:03.022 ========================== 00:27:03.022 Submission Queue Entry Size 00:27:03.022 Max: 1 00:27:03.022 Min: 1 00:27:03.022 Completion Queue Entry Size 00:27:03.022 Max: 1 00:27:03.022 Min: 1 00:27:03.022 Number of Namespaces: 0 00:27:03.022 Compare Command: Not Supported 00:27:03.022 Write Uncorrectable Command: Not Supported 00:27:03.022 Dataset Management Command: Not Supported 00:27:03.022 Write Zeroes Command: Not Supported 00:27:03.022 Set Features Save Field: Not Supported 00:27:03.022 Reservations: Not Supported 00:27:03.022 Timestamp: Not Supported 00:27:03.022 Copy: Not Supported 00:27:03.022 Volatile Write Cache: Not Present 00:27:03.022 Atomic Write Unit (Normal): 1 00:27:03.022 Atomic Write Unit (PFail): 1 00:27:03.022 Atomic Compare & Write Unit: 1 00:27:03.022 Fused Compare & Write: Not Supported 00:27:03.022 Scatter-Gather List 00:27:03.022 SGL Command Set: Supported 00:27:03.022 SGL Keyed: Not Supported 00:27:03.022 SGL Bit Bucket Descriptor: Not Supported 00:27:03.022 SGL Metadata Pointer: Not Supported 00:27:03.022 Oversized SGL: Not Supported 00:27:03.022 SGL Metadata Address: Not Supported 00:27:03.022 SGL Offset: Supported 00:27:03.022 Transport SGL Data Block: Not Supported 00:27:03.022 Replay Protected Memory Block: Not Supported 00:27:03.022 00:27:03.022 Firmware Slot Information 00:27:03.022 ========================= 00:27:03.022 
Active slot: 0 00:27:03.022 00:27:03.022 00:27:03.022 Error Log 00:27:03.022 ========= 00:27:03.022 00:27:03.022 Active Namespaces 00:27:03.022 ================= 00:27:03.022 Discovery Log Page 00:27:03.022 ================== 00:27:03.022 Generation Counter: 2 00:27:03.022 Number of Records: 2 00:27:03.022 Record Format: 0 00:27:03.022 00:27:03.022 Discovery Log Entry 0 00:27:03.022 ---------------------- 00:27:03.022 Transport Type: 3 (TCP) 00:27:03.022 Address Family: 1 (IPv4) 00:27:03.022 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:03.022 Entry Flags: 00:27:03.022 Duplicate Returned Information: 0 00:27:03.022 Explicit Persistent Connection Support for Discovery: 0 00:27:03.022 Transport Requirements: 00:27:03.022 Secure Channel: Not Specified 00:27:03.022 Port ID: 1 (0x0001) 00:27:03.022 Controller ID: 65535 (0xffff) 00:27:03.022 Admin Max SQ Size: 32 00:27:03.022 Transport Service Identifier: 4420 00:27:03.022 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:03.022 Transport Address: 10.0.0.1 00:27:03.022 Discovery Log Entry 1 00:27:03.022 ---------------------- 00:27:03.022 Transport Type: 3 (TCP) 00:27:03.022 Address Family: 1 (IPv4) 00:27:03.022 Subsystem Type: 2 (NVM Subsystem) 00:27:03.022 Entry Flags: 00:27:03.022 Duplicate Returned Information: 0 00:27:03.022 Explicit Persistent Connection Support for Discovery: 0 00:27:03.022 Transport Requirements: 00:27:03.022 Secure Channel: Not Specified 00:27:03.022 Port ID: 1 (0x0001) 00:27:03.022 Controller ID: 65535 (0xffff) 00:27:03.022 Admin Max SQ Size: 32 00:27:03.022 Transport Service Identifier: 4420 00:27:03.022 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:03.022 Transport Address: 10.0.0.1 00:27:03.022 16:18:38 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:03.022 EAL: No free 2048 kB hugepages reported on node 1 00:27:03.022 get_feature(0x01) failed 00:27:03.022 get_feature(0x02) failed 00:27:03.022 get_feature(0x04) failed 00:27:03.022 ===================================================== 00:27:03.022 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:03.022 ===================================================== 00:27:03.022 Controller Capabilities/Features 00:27:03.022 ================================ 00:27:03.022 Vendor ID: 0000 00:27:03.022 Subsystem Vendor ID: 0000 00:27:03.022 Serial Number: 13f6e99ff70725836982 00:27:03.023 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:03.023 Firmware Version: 6.7.0-68 00:27:03.023 Recommended Arb Burst: 6 00:27:03.023 IEEE OUI Identifier: 00 00 00 00:27:03.023 Multi-path I/O 00:27:03.023 May have multiple subsystem ports: Yes 00:27:03.023 May have multiple controllers: Yes 00:27:03.023 Associated with SR-IOV VF: No 00:27:03.023 Max Data Transfer Size: Unlimited 00:27:03.023 Max Number of Namespaces: 1024 00:27:03.023 Max Number of I/O Queues: 128 00:27:03.023 NVMe Specification Version (VS): 1.3 00:27:03.023 NVMe Specification Version (Identify): 1.3 00:27:03.023 Maximum Queue Entries: 1024 00:27:03.023 Contiguous Queues Required: No 00:27:03.023 Arbitration Mechanisms Supported 00:27:03.023 Weighted Round Robin: Not Supported 00:27:03.023 Vendor Specific: Not Supported 00:27:03.023 Reset Timeout: 7500 ms 00:27:03.023 Doorbell Stride: 4 bytes 00:27:03.023 NVM Subsystem Reset: Not Supported 
00:27:03.023 Command Sets Supported 00:27:03.023 NVM Command Set: Supported 00:27:03.023 Boot Partition: Not Supported 00:27:03.023 Memory Page Size Minimum: 4096 bytes 00:27:03.023 Memory Page Size Maximum: 4096 bytes 00:27:03.023 Persistent Memory Region: Not Supported 00:27:03.023 Optional Asynchronous Events Supported 00:27:03.023 Namespace Attribute Notices: Supported 00:27:03.023 Firmware Activation Notices: Not Supported 00:27:03.023 ANA Change Notices: Supported 00:27:03.023 PLE Aggregate Log Change Notices: Not Supported 00:27:03.023 LBA Status Info Alert Notices: Not Supported 00:27:03.023 EGE Aggregate Log Change Notices: Not Supported 00:27:03.023 Normal NVM Subsystem Shutdown event: Not Supported 00:27:03.023 Zone Descriptor Change Notices: Not Supported 00:27:03.023 Discovery Log Change Notices: Not Supported 00:27:03.023 Controller Attributes 00:27:03.023 128-bit Host Identifier: Supported 00:27:03.023 Non-Operational Permissive Mode: Not Supported 00:27:03.023 NVM Sets: Not Supported 00:27:03.023 Read Recovery Levels: Not Supported 00:27:03.023 Endurance Groups: Not Supported 00:27:03.023 Predictable Latency Mode: Not Supported 00:27:03.023 Traffic Based Keep ALive: Supported 00:27:03.023 Namespace Granularity: Not Supported 00:27:03.023 SQ Associations: Not Supported 00:27:03.023 UUID List: Not Supported 00:27:03.023 Multi-Domain Subsystem: Not Supported 00:27:03.023 Fixed Capacity Management: Not Supported 00:27:03.023 Variable Capacity Management: Not Supported 00:27:03.023 Delete Endurance Group: Not Supported 00:27:03.023 Delete NVM Set: Not Supported 00:27:03.023 Extended LBA Formats Supported: Not Supported 00:27:03.023 Flexible Data Placement Supported: Not Supported 00:27:03.023 00:27:03.023 Controller Memory Buffer Support 00:27:03.023 ================================ 00:27:03.023 Supported: No 00:27:03.023 00:27:03.023 Persistent Memory Region Support 00:27:03.023 ================================ 00:27:03.023 Supported: No 00:27:03.023 00:27:03.023 Admin Command Set Attributes 00:27:03.023 ============================ 00:27:03.023 Security Send/Receive: Not Supported 00:27:03.023 Format NVM: Not Supported 00:27:03.023 Firmware Activate/Download: Not Supported 00:27:03.023 Namespace Management: Not Supported 00:27:03.023 Device Self-Test: Not Supported 00:27:03.023 Directives: Not Supported 00:27:03.023 NVMe-MI: Not Supported 00:27:03.023 Virtualization Management: Not Supported 00:27:03.023 Doorbell Buffer Config: Not Supported 00:27:03.023 Get LBA Status Capability: Not Supported 00:27:03.023 Command & Feature Lockdown Capability: Not Supported 00:27:03.023 Abort Command Limit: 4 00:27:03.023 Async Event Request Limit: 4 00:27:03.023 Number of Firmware Slots: N/A 00:27:03.023 Firmware Slot 1 Read-Only: N/A 00:27:03.023 Firmware Activation Without Reset: N/A 00:27:03.023 Multiple Update Detection Support: N/A 00:27:03.023 Firmware Update Granularity: No Information Provided 00:27:03.023 Per-Namespace SMART Log: Yes 00:27:03.023 Asymmetric Namespace Access Log Page: Supported 00:27:03.023 ANA Transition Time : 10 sec 00:27:03.023 00:27:03.023 Asymmetric Namespace Access Capabilities 00:27:03.023 ANA Optimized State : Supported 00:27:03.023 ANA Non-Optimized State : Supported 00:27:03.023 ANA Inaccessible State : Supported 00:27:03.023 ANA Persistent Loss State : Supported 00:27:03.023 ANA Change State : Supported 00:27:03.023 ANAGRPID is not changed : No 00:27:03.023 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:03.023 00:27:03.023 ANA Group Identifier 
Maximum : 128 00:27:03.023 Number of ANA Group Identifiers : 128 00:27:03.023 Max Number of Allowed Namespaces : 1024 00:27:03.023 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:03.023 Command Effects Log Page: Supported 00:27:03.023 Get Log Page Extended Data: Supported 00:27:03.023 Telemetry Log Pages: Not Supported 00:27:03.023 Persistent Event Log Pages: Not Supported 00:27:03.023 Supported Log Pages Log Page: May Support 00:27:03.023 Commands Supported & Effects Log Page: Not Supported 00:27:03.023 Feature Identifiers & Effects Log Page:May Support 00:27:03.023 NVMe-MI Commands & Effects Log Page: May Support 00:27:03.023 Data Area 4 for Telemetry Log: Not Supported 00:27:03.023 Error Log Page Entries Supported: 128 00:27:03.023 Keep Alive: Supported 00:27:03.023 Keep Alive Granularity: 1000 ms 00:27:03.023 00:27:03.023 NVM Command Set Attributes 00:27:03.023 ========================== 00:27:03.023 Submission Queue Entry Size 00:27:03.023 Max: 64 00:27:03.023 Min: 64 00:27:03.023 Completion Queue Entry Size 00:27:03.023 Max: 16 00:27:03.023 Min: 16 00:27:03.023 Number of Namespaces: 1024 00:27:03.023 Compare Command: Not Supported 00:27:03.023 Write Uncorrectable Command: Not Supported 00:27:03.023 Dataset Management Command: Supported 00:27:03.023 Write Zeroes Command: Supported 00:27:03.023 Set Features Save Field: Not Supported 00:27:03.023 Reservations: Not Supported 00:27:03.023 Timestamp: Not Supported 00:27:03.023 Copy: Not Supported 00:27:03.023 Volatile Write Cache: Present 00:27:03.023 Atomic Write Unit (Normal): 1 00:27:03.023 Atomic Write Unit (PFail): 1 00:27:03.023 Atomic Compare & Write Unit: 1 00:27:03.023 Fused Compare & Write: Not Supported 00:27:03.023 Scatter-Gather List 00:27:03.023 SGL Command Set: Supported 00:27:03.023 SGL Keyed: Not Supported 00:27:03.023 SGL Bit Bucket Descriptor: Not Supported 00:27:03.023 SGL Metadata Pointer: Not Supported 00:27:03.023 Oversized SGL: Not Supported 00:27:03.023 SGL Metadata Address: Not Supported 00:27:03.023 SGL Offset: Supported 00:27:03.023 Transport SGL Data Block: Not Supported 00:27:03.023 Replay Protected Memory Block: Not Supported 00:27:03.023 00:27:03.023 Firmware Slot Information 00:27:03.023 ========================= 00:27:03.024 Active slot: 0 00:27:03.024 00:27:03.024 Asymmetric Namespace Access 00:27:03.024 =========================== 00:27:03.024 Change Count : 0 00:27:03.024 Number of ANA Group Descriptors : 1 00:27:03.024 ANA Group Descriptor : 0 00:27:03.024 ANA Group ID : 1 00:27:03.024 Number of NSID Values : 1 00:27:03.024 Change Count : 0 00:27:03.024 ANA State : 1 00:27:03.024 Namespace Identifier : 1 00:27:03.024 00:27:03.024 Commands Supported and Effects 00:27:03.024 ============================== 00:27:03.024 Admin Commands 00:27:03.024 -------------- 00:27:03.024 Get Log Page (02h): Supported 00:27:03.024 Identify (06h): Supported 00:27:03.024 Abort (08h): Supported 00:27:03.024 Set Features (09h): Supported 00:27:03.024 Get Features (0Ah): Supported 00:27:03.024 Asynchronous Event Request (0Ch): Supported 00:27:03.024 Keep Alive (18h): Supported 00:27:03.024 I/O Commands 00:27:03.024 ------------ 00:27:03.024 Flush (00h): Supported 00:27:03.024 Write (01h): Supported LBA-Change 00:27:03.024 Read (02h): Supported 00:27:03.024 Write Zeroes (08h): Supported LBA-Change 00:27:03.024 Dataset Management (09h): Supported 00:27:03.024 00:27:03.024 Error Log 00:27:03.024 ========= 00:27:03.024 Entry: 0 00:27:03.024 Error Count: 0x3 00:27:03.024 Submission Queue Id: 0x0 00:27:03.024 Command Id: 0x5 
00:27:03.024 Phase Bit: 0 00:27:03.024 Status Code: 0x2 00:27:03.024 Status Code Type: 0x0 00:27:03.024 Do Not Retry: 1 00:27:03.024 Error Location: 0x28 00:27:03.024 LBA: 0x0 00:27:03.024 Namespace: 0x0 00:27:03.024 Vendor Log Page: 0x0 00:27:03.024 ----------- 00:27:03.024 Entry: 1 00:27:03.024 Error Count: 0x2 00:27:03.024 Submission Queue Id: 0x0 00:27:03.024 Command Id: 0x5 00:27:03.024 Phase Bit: 0 00:27:03.024 Status Code: 0x2 00:27:03.024 Status Code Type: 0x0 00:27:03.024 Do Not Retry: 1 00:27:03.024 Error Location: 0x28 00:27:03.024 LBA: 0x0 00:27:03.024 Namespace: 0x0 00:27:03.024 Vendor Log Page: 0x0 00:27:03.024 ----------- 00:27:03.024 Entry: 2 00:27:03.024 Error Count: 0x1 00:27:03.024 Submission Queue Id: 0x0 00:27:03.024 Command Id: 0x4 00:27:03.024 Phase Bit: 0 00:27:03.024 Status Code: 0x2 00:27:03.024 Status Code Type: 0x0 00:27:03.024 Do Not Retry: 1 00:27:03.024 Error Location: 0x28 00:27:03.024 LBA: 0x0 00:27:03.024 Namespace: 0x0 00:27:03.024 Vendor Log Page: 0x0 00:27:03.024 00:27:03.024 Number of Queues 00:27:03.024 ================ 00:27:03.024 Number of I/O Submission Queues: 128 00:27:03.024 Number of I/O Completion Queues: 128 00:27:03.024 00:27:03.024 ZNS Specific Controller Data 00:27:03.024 ============================ 00:27:03.024 Zone Append Size Limit: 0 00:27:03.024 00:27:03.024 00:27:03.024 Active Namespaces 00:27:03.024 ================= 00:27:03.024 get_feature(0x05) failed 00:27:03.024 Namespace ID:1 00:27:03.024 Command Set Identifier: NVM (00h) 00:27:03.024 Deallocate: Supported 00:27:03.024 Deallocated/Unwritten Error: Not Supported 00:27:03.024 Deallocated Read Value: Unknown 00:27:03.024 Deallocate in Write Zeroes: Not Supported 00:27:03.024 Deallocated Guard Field: 0xFFFF 00:27:03.024 Flush: Supported 00:27:03.024 Reservation: Not Supported 00:27:03.024 Namespace Sharing Capabilities: Multiple Controllers 00:27:03.024 Size (in LBAs): 3750748848 (1788GiB) 00:27:03.024 Capacity (in LBAs): 3750748848 (1788GiB) 00:27:03.024 Utilization (in LBAs): 3750748848 (1788GiB) 00:27:03.024 UUID: c71061b5-2e00-4b11-bbb6-4a811b2d0384 00:27:03.024 Thin Provisioning: Not Supported 00:27:03.024 Per-NS Atomic Units: Yes 00:27:03.024 Atomic Write Unit (Normal): 8 00:27:03.024 Atomic Write Unit (PFail): 8 00:27:03.024 Preferred Write Granularity: 8 00:27:03.024 Atomic Compare & Write Unit: 8 00:27:03.024 Atomic Boundary Size (Normal): 0 00:27:03.024 Atomic Boundary Size (PFail): 0 00:27:03.024 Atomic Boundary Offset: 0 00:27:03.024 NGUID/EUI64 Never Reused: No 00:27:03.024 ANA group ID: 1 00:27:03.024 Namespace Write Protected: No 00:27:03.024 Number of LBA Formats: 1 00:27:03.024 Current LBA Format: LBA Format #00 00:27:03.024 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:03.024 00:27:03.024 16:18:38 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:03.024 16:18:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:03.024 16:18:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:27:03.024 16:18:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:03.024 16:18:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:27:03.024 16:18:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:03.024 16:18:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:03.024 rmmod nvme_tcp 00:27:03.024 rmmod nvme_fabrics 00:27:03.285 16:18:38 
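Note: condensed from the trace above, the host-side queries against the kernel target come down to three commands; the hostnqn/hostid and the 10.0.0.1:4420 address are the values recorded in this run, not generic defaults:

    # discover the kernel target over TCP (values copied from the trace above)
    nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
                  --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420
    # identify the discovery controller, then the data subsystem
    spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'
    spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

The get_feature(0x01/0x02/0x04/0x05) failures interleaved with the second identify are non-fatal: spdk_nvme_identify probes optional features the kernel target rejects, then still prints the full identify data, as seen above.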
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:03.285 16:18:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:27:03.285 16:18:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:27:03.285 16:18:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:03.285 16:18:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:03.285 16:18:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:03.285 16:18:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:03.285 16:18:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:03.285 16:18:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:03.285 16:18:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:03.285 16:18:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:03.285 16:18:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:05.194 16:18:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:05.194 16:18:40 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:05.194 16:18:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:05.194 16:18:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:27:05.194 16:18:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:05.194 16:18:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:05.194 16:18:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:05.194 16:18:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:05.194 16:18:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:05.194 16:18:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:05.194 16:18:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:09.397 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:09.397 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:09.397 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:09.397 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:09.397 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:09.397 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:09.397 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:09.397 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:09.397 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:09.397 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:09.397 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:09.397 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:09.397 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 
00:27:09.397 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:09.397 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:09.397 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:09.397 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:09.397 00:27:09.397 real 0m18.715s 00:27:09.397 user 0m5.154s 00:27:09.397 sys 0m10.534s 00:27:09.397 16:18:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:09.397 16:18:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:09.397 ************************************ 00:27:09.397 END TEST nvmf_identify_kernel_target 00:27:09.397 ************************************ 00:27:09.397 16:18:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:09.397 16:18:45 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:09.397 16:18:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:09.397 16:18:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:09.397 16:18:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:09.397 ************************************ 00:27:09.397 START TEST nvmf_auth_host 00:27:09.397 ************************************ 00:27:09.397 16:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:09.397 * Looking for test storage... 00:27:09.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:09.398 16:18:45 
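Note: the identify_kernel_target run that just finished drives the Linux kernel target entirely through configfs. The xtrace does not show where each echo is redirected, so the attribute file names below are filled in from the standard nvmet configfs layout; this is a condensed sketch of the traced setup and teardown, not a literal replay of nvmf/common.sh:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    ns=$subsys/namespaces/1
    port=$nvmet/ports/1

    modprobe nvmet
    mkdir "$subsys"
    mkdir "$ns"
    mkdir "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # shows up as the Model Number above
    echo 1 > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$ns/device_path"    # the non-zoned disk with no GPT found above
    echo 1 > "$ns/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp      > "$port/addr_trtype"
    echo 4420     > "$port/addr_trsvcid"
    echo ipv4     > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"

    # teardown, as traced in clean_kernel_target
    echo 0 > "$ns/enable"                    # presumably disables the namespace; redirect not shown in the trace
    rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
    rmdir "$ns" "$port" "$subsys"
    modprobe -r nvmet_tcp nvmet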
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:09.398 16:18:45 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:27:09.398 16:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # 
local -ga x722 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:17.541 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:17.541 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:17.541 16:18:52 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:17.541 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:17.541 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:17.541 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:17.542 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:17.542 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:17.542 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:17.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:17.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.540 ms 00:27:17.542 00:27:17.542 --- 10.0.0.2 ping statistics --- 00:27:17.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:17.542 rtt min/avg/max/mdev = 0.540/0.540/0.540/0.000 ms 00:27:17.542 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:17.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:17.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.384 ms 00:27:17.542 00:27:17.542 --- 10.0.0.1 ping statistics --- 00:27:17.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:17.542 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:27:17.542 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:17.542 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:27:17.542 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:17.542 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:17.542 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:17.542 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:17.542 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:17.542 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:17.542 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:17.542 16:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:17.542 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:17.542 16:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:17.542 16:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.542 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2441856 00:27:17.542 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:17.542 16:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 2441856 00:27:17.542 16:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2441856 ']' 00:27:17.542 16:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:17.542 16:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:17.542 16:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
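Note: the TCP "phy" test network above uses no virtual links. The two physical e810 ports detected earlier (cvl_0_0 and cvl_0_1) talk over a real link; one is moved into a private network namespace to act as the target side, and nvmf_tgt is then started inside that namespace. Condensed from the trace:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the default netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port on the initiator side
    ping -c 1 10.0.0.2                                             # sanity checks, both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # the target application then runs inside the namespace:
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth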
00:27:17.542 16:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:17.542 16:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.542 16:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:17.542 16:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:27:17.542 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:17.542 16:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:17.542 16:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.542 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:17.542 16:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:17.542 16:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:17.542 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:17.542 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:17.542 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:17.542 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:17.542 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:17.542 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:17.542 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=76728b0f10aaf6ccda8f6c421fb36432 00:27:17.542 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:17.542 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Yy0 00:27:17.542 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 76728b0f10aaf6ccda8f6c421fb36432 0 00:27:17.542 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 76728b0f10aaf6ccda8f6c421fb36432 0 00:27:17.542 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:17.542 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:17.542 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=76728b0f10aaf6ccda8f6c421fb36432 00:27:17.542 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:17.542 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:17.542 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Yy0 00:27:17.542 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Yy0 00:27:17.542 16:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Yy0 00:27:17.542 16:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:17.542 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:17.542 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:17.542 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:17.542 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:17.542 
16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:17.542 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:17.542 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=93480cc0794aafba54bfe6d1c4eac70645d77875fd1ae9538f9205773a317936 00:27:17.542 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:17.542 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Krd 00:27:17.542 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 93480cc0794aafba54bfe6d1c4eac70645d77875fd1ae9538f9205773a317936 3 00:27:17.542 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 93480cc0794aafba54bfe6d1c4eac70645d77875fd1ae9538f9205773a317936 3 00:27:17.542 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:17.542 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:17.542 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=93480cc0794aafba54bfe6d1c4eac70645d77875fd1ae9538f9205773a317936 00:27:17.542 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:17.542 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Krd 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Krd 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Krd 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=03a2a5148bd2e5f916d579e19ef94e977593890322e4d763 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.QE5 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 03a2a5148bd2e5f916d579e19ef94e977593890322e4d763 0 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 03a2a5148bd2e5f916d579e19ef94e977593890322e4d763 0 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=03a2a5148bd2e5f916d579e19ef94e977593890322e4d763 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.QE5 00:27:17.802 16:18:53 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.QE5 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.QE5 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0ecac10117c3b8b89847020baf8bcf4a9f71978f71e3a7f7 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.PTF 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0ecac10117c3b8b89847020baf8bcf4a9f71978f71e3a7f7 2 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0ecac10117c3b8b89847020baf8bcf4a9f71978f71e3a7f7 2 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0ecac10117c3b8b89847020baf8bcf4a9f71978f71e3a7f7 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.PTF 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.PTF 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.PTF 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=bcbad2e875ad60d606400223efd5a4b7 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.OdL 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key bcbad2e875ad60d606400223efd5a4b7 1 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 bcbad2e875ad60d606400223efd5a4b7 1 
00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=bcbad2e875ad60d606400223efd5a4b7 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.OdL 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.OdL 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.OdL 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9d144c7a41fdfec8a3a63982decb6292 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.TeY 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9d144c7a41fdfec8a3a63982decb6292 1 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9d144c7a41fdfec8a3a63982decb6292 1 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9d144c7a41fdfec8a3a63982decb6292 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:17.802 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.TeY 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.TeY 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.TeY 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=f80b1f03a757f5b46dcdc3210930d83b1706e00b936e4125 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.z5e 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f80b1f03a757f5b46dcdc3210930d83b1706e00b936e4125 2 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f80b1f03a757f5b46dcdc3210930d83b1706e00b936e4125 2 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f80b1f03a757f5b46dcdc3210930d83b1706e00b936e4125 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.z5e 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.z5e 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.z5e 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=78346b0586de58ed9d0b23434026e856 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Gu6 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 78346b0586de58ed9d0b23434026e856 0 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 78346b0586de58ed9d0b23434026e856 0 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=78346b0586de58ed9d0b23434026e856 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Gu6 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Gu6 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Gu6 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cef99e93994bfc204d9311c8fe9fef441cb48d77a7384376230b7bc5c4843503 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.VXw 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cef99e93994bfc204d9311c8fe9fef441cb48d77a7384376230b7bc5c4843503 3 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cef99e93994bfc204d9311c8fe9fef441cb48d77a7384376230b7bc5c4843503 3 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cef99e93994bfc204d9311c8fe9fef441cb48d77a7384376230b7bc5c4843503 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.VXw 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.VXw 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.VXw 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2441856 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2441856 ']' 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:18.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
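Each gen_dhchap_key call traced above pulls len/2 random bytes out of /dev/urandom with xxd, drops the hex string into a mktemp file, and has the inline python step wrap it in the DHHC-1:<digest>:<base64>: form that the rpc calls further down consume. A minimal stand-alone sketch of that flow follows; the 4-byte CRC32 trailer and the two-hex-digit digest id are inferred from the resulting DHHC-1 strings in this log, not taken from the script source, so treat them as assumptions. The digest ids 0-3 correspond to the null/sha256/sha384/sha512 entries of the digests array declared at the top of each call.

# Sketch only (not the harness code): mirrors the traced gen_dhchap_key /
# format_dhchap_key steps. The CRC32 trailer and "%02x" digest encoding are
# assumptions inferred from the DHHC-1:<digest>:<base64>: values in this log.
gen_dhchap_key_sketch() {
    local digest=$1 len=$2        # digest id 0-3 (null/sha256/sha384/sha512), len in hex chars
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex chars of randomness
    file=$(mktemp -t spdk.key-sketch.XXX)
    python3 - "$key" "$digest" > "$file" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()
digest = int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")   # assumed 4-byte trailer
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
PY
    chmod 0600 "$file"            # keys are created 0600, as in the trace
    echo "$file"
}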
00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:18.063 16:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.324 16:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:18.324 16:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:27:18.324 16:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:18.324 16:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Yy0 00:27:18.324 16:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.324 16:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.324 16:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.324 16:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Krd ]] 00:27:18.324 16:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Krd 00:27:18.324 16:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.324 16:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.324 16:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.QE5 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.PTF ]] 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.PTF 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.OdL 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.TeY ]] 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.TeY 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
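The keyring_file_add_key loop that starts above (and finishes with key3/ckey3 and key4 just below) registers each generated secret file with the running target under the keyN/ckeyN names that the later attach calls reference. rpc_cmd is the harness wrapper around scripts/rpc.py; a manual equivalent for the first two pairs, using the file names from this run, would look roughly like this.

# Sketch: key names and paths are the ones visible in the trace; calling
# scripts/rpc.py directly (default socket /var/tmp/spdk.sock) instead of the
# harness's rpc_cmd wrapper is an assumption about the wrapper, nothing more.
./scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.Yy0
./scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Krd
./scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.QE5
./scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.PTF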
00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.z5e 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Gu6 ]] 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Gu6 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.VXw 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
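Once the keys are registered, nvmet_auth_init resolves the initiator address (10.0.0.1) and hands off to configure_kernel_target, whose mkdir/echo/ln -s steps appear below after the block-device rescan settles. The trace only prints the values being written, so in the sketch that follows the configfs attribute names are filled in from the standard kernel nvmet layout and should be read as assumptions.

# Sketch of the configfs writes traced below; attribute names (attr_model,
# attr_allow_any_host, device_path, enable, addr_*) are assumed from the
# stock nvmet layout, since the log only shows the echoed values.
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=$nvmet/ports/1

modprobe nvmet
mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"
echo 1            > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"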
00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:18.325 16:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:21.625 Waiting for block devices as requested 00:27:21.625 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:21.625 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:21.924 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:21.924 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:21.924 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:21.924 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:22.184 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:22.184 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:22.184 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:22.445 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:22.445 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:22.705 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:22.705 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:22.705 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:22.705 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:22.964 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:22.964 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:23.914 16:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:23.914 16:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:23.914 16:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:23.914 16:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:23.914 16:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:23.914 16:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:23.914 16:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:23.914 16:18:59 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:23.914 16:18:59 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:23.914 No valid GPT data, bailing 00:27:23.914 16:18:59 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:23.914 16:18:59 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:27:23.914 16:18:59 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:27:23.914 16:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:23.914 16:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:23.914 16:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:23.914 16:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:23.914 16:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:23.914 16:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:23.914 16:18:59 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:27:23.914 16:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:23.914 16:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:27:23.914 16:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:23.914 16:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:27:23.914 16:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:27:23.914 16:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:27:23.914 16:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:23.914 16:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:27:23.914 00:27:23.914 Discovery Log Number of Records 2, Generation counter 2 00:27:23.914 =====Discovery Log Entry 0====== 00:27:23.914 trtype: tcp 00:27:23.914 adrfam: ipv4 00:27:23.914 subtype: current discovery subsystem 00:27:23.914 treq: not specified, sq flow control disable supported 00:27:23.914 portid: 1 00:27:23.914 trsvcid: 4420 00:27:23.914 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:23.914 traddr: 10.0.0.1 00:27:23.914 eflags: none 00:27:23.914 sectype: none 00:27:23.914 =====Discovery Log Entry 1====== 00:27:23.914 trtype: tcp 00:27:23.914 adrfam: ipv4 00:27:23.914 subtype: nvme subsystem 00:27:23.914 treq: not specified, sq flow control disable supported 00:27:23.914 portid: 1 00:27:23.914 trsvcid: 4420 00:27:23.914 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:23.914 traddr: 10.0.0.1 00:27:23.914 eflags: none 00:27:23.914 sectype: none 00:27:23.914 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:23.914 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:23.914 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:23.914 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:23.914 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.914 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:23.914 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:23.914 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:23.914 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDNhMmE1MTQ4YmQyZTVmOTE2ZDU3OWUxOWVmOTRlOTc3NTkzODkwMzIyZTRkNzYzHwyBBw==: 00:27:23.914 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: 00:27:23.914 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:23.914 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:23.914 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDNhMmE1MTQ4YmQyZTVmOTE2ZDU3OWUxOWVmOTRlOTc3NTkzODkwMzIyZTRkNzYzHwyBBw==: 00:27:23.914 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: 
]] 00:27:23.914 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: 00:27:23.914 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:23.914 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:23.914 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:23.914 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:23.914 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:23.915 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.915 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:23.915 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:23.915 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:23.915 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.915 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:23.915 16:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.915 16:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.915 16:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.915 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.915 16:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:23.915 16:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:23.915 16:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:23.915 16:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.915 16:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.915 16:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:23.915 16:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.915 16:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:23.915 16:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:23.915 16:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:23.915 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:23.915 16:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.915 16:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.216 nvme0n1 00:27:24.216 16:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.216 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.216 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.216 16:18:59 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.216 16:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.216 16:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.216 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.216 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.216 16:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.216 16:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.216 16:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.216 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:24.216 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:24.216 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.216 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:24.216 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.216 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:24.216 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:24.216 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:24.216 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzY3MjhiMGYxMGFhZjZjY2RhOGY2YzQyMWZiMzY0MzKCiVxi: 00:27:24.216 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTM0ODBjYzA3OTRhYWZiYTU0YmZlNmQxYzRlYWM3MDY0NWQ3Nzg3NWZkMWFlOTUzOGY5MjA1NzczYTMxNzkzNpsdIiQ=: 00:27:24.216 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:24.216 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:24.216 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzY3MjhiMGYxMGFhZjZjY2RhOGY2YzQyMWZiMzY0MzKCiVxi: 00:27:24.216 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTM0ODBjYzA3OTRhYWZiYTU0YmZlNmQxYzRlYWM3MDY0NWQ3Nzg3NWZkMWFlOTUzOGY5MjA1NzczYTMxNzkzNpsdIiQ=: ]] 00:27:24.216 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTM0ODBjYzA3OTRhYWZiYTU0YmZlNmQxYzRlYWM3MDY0NWQ3Nzg3NWZkMWFlOTUzOGY5MjA1NzczYTMxNzkzNpsdIiQ=: 00:27:24.216 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:27:24.216 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.216 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:24.216 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:24.216 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:24.216 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.216 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:24.216 16:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.216 16:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.216 16:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.216 
16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.216 16:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:24.216 16:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:24.216 16:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:24.216 16:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.216 16:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.216 16:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:24.216 16:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.216 16:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:24.216 16:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:24.216 16:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:24.216 16:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:24.216 16:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.216 16:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.477 nvme0n1 00:27:24.477 16:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.477 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.477 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.477 16:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.477 16:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.477 16:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.477 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.477 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.477 16:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.477 16:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.477 16:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.477 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.477 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:24.477 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.477 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:24.477 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:24.477 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:24.477 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDNhMmE1MTQ4YmQyZTVmOTE2ZDU3OWUxOWVmOTRlOTc3NTkzODkwMzIyZTRkNzYzHwyBBw==: 00:27:24.477 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: 00:27:24.477 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:24.477 16:19:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:24.477 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDNhMmE1MTQ4YmQyZTVmOTE2ZDU3OWUxOWVmOTRlOTc3NTkzODkwMzIyZTRkNzYzHwyBBw==: 00:27:24.477 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: ]] 00:27:24.477 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: 00:27:24.477 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:24.477 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.477 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:24.477 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:24.477 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:24.477 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.477 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:24.477 16:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.477 16:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.477 16:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.477 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.477 16:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:24.477 16:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:24.477 16:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:24.477 16:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.477 16:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.477 16:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:24.477 16:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.477 16:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:24.477 16:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:24.477 16:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:24.477 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:24.477 16:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.477 16:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.738 nvme0n1 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
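On the target side, each nvmet_auth_set_key call traced above (for example "sha256 ffdhe2048 0") pushes the chosen hash, DH group and DHHC-1 secrets into the kernel host entry that nvmet_auth_init created next to the discovery output further up. The trace only shows the echoed values, so the configfs attribute paths in this sketch follow the stock kernel nvmet in-band-auth layout and are assumptions, as is attr_allow_any_host being the target of the bare "echo 0".

# Sketch of nvmet_auth_init + nvmet_auth_set_key as suggested by the echoes
# in the trace; the dhchap_*/attr_allow_any_host destinations are assumed,
# since the log never prints the redirection targets.
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
host=$nvmet/hosts/nqn.2024-02.io.spdk:host0

mkdir "$host"
echo 0 > "$subsys/attr_allow_any_host"      # assumed target of the traced 'echo 0'
ln -s "$host" "$subsys/allowed_hosts/"

# keyid 0 pair from this run (key0 is a null-digest secret, ckey0 sha512):
echo 'hmac(sha256)' > "$host/dhchap_hash"
echo ffdhe2048      > "$host/dhchap_dhgroup"
echo 'DHHC-1:00:NzY3MjhiMGYxMGFhZjZjY2RhOGY2YzQyMWZiMzY0MzKCiVxi:' > "$host/dhchap_key"
echo 'DHHC-1:03:OTM0ODBjYzA3OTRhYWZiYTU0YmZlNmQxYzRlYWM3MDY0NWQ3Nzg3NWZkMWFlOTUzOGY5MjA1NzczYTMxNzkzNpsdIiQ=:' > "$host/dhchap_ctrl_key"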
00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmNiYWQyZTg3NWFkNjBkNjA2NDAwMjIzZWZkNWE0Yjc0ENZS: 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWQxNDRjN2E0MWZkZmVjOGEzYTYzOTgyZGVjYjYyOTJMGr0e: 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmNiYWQyZTg3NWFkNjBkNjA2NDAwMjIzZWZkNWE0Yjc0ENZS: 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWQxNDRjN2E0MWZkZmVjOGEzYTYzOTgyZGVjYjYyOTJMGr0e: ]] 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWQxNDRjN2E0MWZkZmVjOGEzYTYzOTgyZGVjYjYyOTJMGr0e: 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.738 nvme0n1 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.738 16:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.998 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.998 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.998 16:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.998 16:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.999 16:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.999 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.999 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:24.999 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.999 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:24.999 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:24.999 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:24.999 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjgwYjFmMDNhNzU3ZjViNDZkY2RjMzIxMDkzMGQ4M2IxNzA2ZTAwYjkzNmU0MTI1Tx9Uwg==: 00:27:24.999 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzgzNDZiMDU4NmRlNThlZDlkMGIyMzQzNDAyNmU4NTbZmCqK: 00:27:24.999 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:24.999 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:24.999 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjgwYjFmMDNhNzU3ZjViNDZkY2RjMzIxMDkzMGQ4M2IxNzA2ZTAwYjkzNmU0MTI1Tx9Uwg==: 00:27:24.999 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzgzNDZiMDU4NmRlNThlZDlkMGIyMzQzNDAyNmU4NTbZmCqK: ]] 00:27:24.999 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzgzNDZiMDU4NmRlNThlZDlkMGIyMzQzNDAyNmU4NTbZmCqK: 00:27:24.999 16:19:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:24.999 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.999 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:24.999 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:24.999 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:24.999 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.999 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:24.999 16:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.999 16:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.999 16:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.999 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.999 16:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:24.999 16:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:24.999 16:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:24.999 16:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.999 16:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.999 16:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:24.999 16:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.999 16:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:24.999 16:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:24.999 16:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:24.999 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:24.999 16:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.999 16:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.999 nvme0n1 00:27:24.999 16:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.999 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.999 16:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.999 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.999 16:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.999 16:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.999 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.999 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.999 16:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.999 16:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.259 16:19:00 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.259 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.259 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:25.259 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.259 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:25.259 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:25.259 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:25.259 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2VmOTllOTM5OTRiZmMyMDRkOTMxMWM4ZmU5ZmVmNDQxY2I0OGQ3N2E3Mzg0Mzc2MjMwYjdiYzVjNDg0MzUwM7wyRVI=: 00:27:25.259 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:25.259 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:25.259 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:25.259 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2VmOTllOTM5OTRiZmMyMDRkOTMxMWM4ZmU5ZmVmNDQxY2I0OGQ3N2E3Mzg0Mzc2MjMwYjdiYzVjNDg0MzUwM7wyRVI=: 00:27:25.259 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:25.259 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:25.259 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.259 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:25.259 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:25.259 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:25.259 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.259 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:25.259 16:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.259 16:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.259 16:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.259 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.259 16:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:25.259 16:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:25.259 16:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:25.259 16:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.259 16:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.259 16:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:25.259 16:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.259 16:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:25.259 16:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:25.259 16:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:25.259 16:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:25.259 16:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.259 16:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.259 nvme0n1 00:27:25.259 16:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.259 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.259 16:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.259 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.259 16:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.259 16:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.259 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.259 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.259 16:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.259 16:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.259 16:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.259 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:25.259 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.259 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:25.259 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.259 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:25.259 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:25.259 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:25.259 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzY3MjhiMGYxMGFhZjZjY2RhOGY2YzQyMWZiMzY0MzKCiVxi: 00:27:25.259 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTM0ODBjYzA3OTRhYWZiYTU0YmZlNmQxYzRlYWM3MDY0NWQ3Nzg3NWZkMWFlOTUzOGY5MjA1NzczYTMxNzkzNpsdIiQ=: 00:27:25.259 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:25.259 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:25.259 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzY3MjhiMGYxMGFhZjZjY2RhOGY2YzQyMWZiMzY0MzKCiVxi: 00:27:25.259 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTM0ODBjYzA3OTRhYWZiYTU0YmZlNmQxYzRlYWM3MDY0NWQ3Nzg3NWZkMWFlOTUzOGY5MjA1NzczYTMxNzkzNpsdIiQ=: ]] 00:27:25.259 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTM0ODBjYzA3OTRhYWZiYTU0YmZlNmQxYzRlYWM3MDY0NWQ3Nzg3NWZkMWFlOTUzOGY5MjA1NzczYTMxNzkzNpsdIiQ=: 00:27:25.259 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:25.259 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.259 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:25.259 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:25.259 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:25.259 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:25.259 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:25.259 16:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.259 16:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.520 16:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.520 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.520 16:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:25.520 16:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:25.520 16:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:25.520 16:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.520 16:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.520 16:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:25.520 16:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.520 16:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:25.520 16:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:25.520 16:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:25.520 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:25.520 16:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.520 16:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.520 nvme0n1 00:27:25.520 16:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.520 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.520 16:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.520 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.520 16:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.520 16:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.520 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.520 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.520 16:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.520 16:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.520 16:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.520 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.520 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:25.520 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.520 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:25.520 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:25.520 16:19:01 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:27:25.520 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDNhMmE1MTQ4YmQyZTVmOTE2ZDU3OWUxOWVmOTRlOTc3NTkzODkwMzIyZTRkNzYzHwyBBw==: 00:27:25.520 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: 00:27:25.520 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:25.520 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:25.520 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDNhMmE1MTQ4YmQyZTVmOTE2ZDU3OWUxOWVmOTRlOTc3NTkzODkwMzIyZTRkNzYzHwyBBw==: 00:27:25.520 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: ]] 00:27:25.520 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: 00:27:25.520 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:25.520 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.520 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:25.520 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:25.520 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:25.520 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.520 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:25.520 16:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.520 16:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.786 16:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.786 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.786 16:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:25.786 16:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:25.786 16:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:25.786 16:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.786 16:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.786 16:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:25.786 16:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.786 16:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:25.786 16:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:25.786 16:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:25.786 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:25.786 16:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.786 16:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.786 nvme0n1 00:27:25.786 
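[Editor's note] The passes traced above and below all follow the same shape. The sketch here is condensed from the xtrace output only: nvmet_auth_set_key, get_main_ns_ip and rpc_cmd are the suite's own helpers (rpc_cmd wraps SPDK's JSON-RPC client), and the keys[]/ckeys[] values are the DHHC-1 secrets the trace prints for keyid 1. Where the key goes on the target side is not visible in the trace, so it is only described in a comment, not spelled out.

# One pass of the ffdhe3072 loop, reconstructed from host/auth.sh@102-104 in the trace.
digest=sha256 dhgroup=ffdhe3072 keyid=1
hostnqn=nqn.2024-02.io.spdk:host0
subnqn=nqn.2024-02.io.spdk:cnode0
keys[1]='DHHC-1:00:MDNhMmE1MTQ4YmQyZTVmOTE2ZDU3OWUxOWVmOTRlOTc3NTkzODkwMzIyZTRkNzYzHwyBBw==:'
ckeys[1]='DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==:'

# Target side: per the trace, nvmet_auth_set_key echoes 'hmac(sha256)', the
# dhgroup and the key/ckey strings into the target's auth configuration
# (the destination is not shown in the xtrace, so no path is assumed here).
nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

# Host side: restrict the initiator to one digest/dhgroup, then connect with
# the matching named key. The controller key is passed only when a ckey exists
# for this keyid; keyid 4 has none, so the flag is dropped there.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a "$(get_main_ns_ip)" -s 4420 -q "$hostnqn" -n "$subnqn" \
    --dhchap-key "key${keyid}" \
    ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
# The get_controllers/detach check that closes every pass is sketched after
# the ffdhe4096 passes below.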
16:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.786 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.786 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.786 16:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.786 16:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.786 16:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.786 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.786 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.786 16:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.786 16:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.786 16:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.786 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.786 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:25.786 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.786 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:25.786 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:25.786 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:25.786 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmNiYWQyZTg3NWFkNjBkNjA2NDAwMjIzZWZkNWE0Yjc0ENZS: 00:27:25.786 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWQxNDRjN2E0MWZkZmVjOGEzYTYzOTgyZGVjYjYyOTJMGr0e: 00:27:25.786 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:25.786 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:25.786 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmNiYWQyZTg3NWFkNjBkNjA2NDAwMjIzZWZkNWE0Yjc0ENZS: 00:27:25.786 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWQxNDRjN2E0MWZkZmVjOGEzYTYzOTgyZGVjYjYyOTJMGr0e: ]] 00:27:25.786 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWQxNDRjN2E0MWZkZmVjOGEzYTYzOTgyZGVjYjYyOTJMGr0e: 00:27:25.786 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:25.786 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.786 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:25.786 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:25.786 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:25.786 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.786 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:25.786 16:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.786 16:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.047 16:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.047 16:19:01 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:26.047 16:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:26.047 16:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:26.047 16:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:26.047 16:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.047 16:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.047 16:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:26.047 16:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.047 16:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:26.048 16:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:26.048 16:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:26.048 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:26.048 16:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.048 16:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.048 nvme0n1 00:27:26.048 16:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.048 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.048 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.048 16:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.048 16:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.048 16:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.048 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.048 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.048 16:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.048 16:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.309 16:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.309 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.309 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:26.309 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.309 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:26.309 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:26.309 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:26.309 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjgwYjFmMDNhNzU3ZjViNDZkY2RjMzIxMDkzMGQ4M2IxNzA2ZTAwYjkzNmU0MTI1Tx9Uwg==: 00:27:26.309 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzgzNDZiMDU4NmRlNThlZDlkMGIyMzQzNDAyNmU4NTbZmCqK: 00:27:26.309 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:26.309 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
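[Editor's note] The get_main_ns_ip trace that repeats before every attach (nvmf/common.sh@741-755) can be read back as the helper below. It is an approximation: the transport variable's name is not visible in the trace (only its value, "tcp"), so $TEST_TRANSPORT is an assumed name, and the indirect-expansion step is inferred from the jump between the candidate variable name and 10.0.0.1.

get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    # $TEST_TRANSPORT is an assumed name; the trace only shows the value "tcp".
    [[ -z $TEST_TRANSPORT ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z $ip ]] && return 1
    # Indirect expansion turns the variable name into the address,
    # e.g. NVMF_INITIATOR_IP -> 10.0.0.1 in this run.
    ip=${!ip}
    [[ -z $ip ]] && return 1
    echo "$ip"
}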
00:27:26.309 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjgwYjFmMDNhNzU3ZjViNDZkY2RjMzIxMDkzMGQ4M2IxNzA2ZTAwYjkzNmU0MTI1Tx9Uwg==: 00:27:26.309 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzgzNDZiMDU4NmRlNThlZDlkMGIyMzQzNDAyNmU4NTbZmCqK: ]] 00:27:26.309 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzgzNDZiMDU4NmRlNThlZDlkMGIyMzQzNDAyNmU4NTbZmCqK: 00:27:26.309 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:26.309 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.309 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:26.309 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:26.309 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:26.309 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.309 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:26.309 16:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.309 16:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.309 16:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.309 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.309 16:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:26.309 16:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:26.309 16:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:26.309 16:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.309 16:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.309 16:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:26.309 16:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.309 16:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:26.309 16:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:26.309 16:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:26.309 16:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:26.309 16:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.309 16:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.309 nvme0n1 00:27:26.309 16:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.309 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.309 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.309 16:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.309 16:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.309 16:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.309 
16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.309 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.309 16:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.309 16:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.570 16:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.570 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.570 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:26.570 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.570 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:26.570 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:26.570 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:26.570 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2VmOTllOTM5OTRiZmMyMDRkOTMxMWM4ZmU5ZmVmNDQxY2I0OGQ3N2E3Mzg0Mzc2MjMwYjdiYzVjNDg0MzUwM7wyRVI=: 00:27:26.570 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:26.570 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:26.570 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:26.570 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2VmOTllOTM5OTRiZmMyMDRkOTMxMWM4ZmU5ZmVmNDQxY2I0OGQ3N2E3Mzg0Mzc2MjMwYjdiYzVjNDg0MzUwM7wyRVI=: 00:27:26.570 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:26.570 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:26.570 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.570 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:26.570 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:26.570 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:26.570 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.570 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:26.570 16:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.570 16:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.570 16:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.570 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.570 16:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:26.570 16:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:26.570 16:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:26.570 16:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.570 16:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.570 16:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:26.570 16:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.570 16:19:02 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:26.570 16:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:26.570 16:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:26.570 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:26.570 16:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.570 16:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.570 nvme0n1 00:27:26.570 16:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.570 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.570 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.570 16:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.570 16:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.570 16:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.830 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.830 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.830 16:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.830 16:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.830 16:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.830 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:26.830 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.830 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:26.830 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.830 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:26.830 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:26.830 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:26.830 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzY3MjhiMGYxMGFhZjZjY2RhOGY2YzQyMWZiMzY0MzKCiVxi: 00:27:26.830 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTM0ODBjYzA3OTRhYWZiYTU0YmZlNmQxYzRlYWM3MDY0NWQ3Nzg3NWZkMWFlOTUzOGY5MjA1NzczYTMxNzkzNpsdIiQ=: 00:27:26.830 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:26.830 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:26.830 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzY3MjhiMGYxMGFhZjZjY2RhOGY2YzQyMWZiMzY0MzKCiVxi: 00:27:26.830 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTM0ODBjYzA3OTRhYWZiYTU0YmZlNmQxYzRlYWM3MDY0NWQ3Nzg3NWZkMWFlOTUzOGY5MjA1NzczYTMxNzkzNpsdIiQ=: ]] 00:27:26.830 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTM0ODBjYzA3OTRhYWZiYTU0YmZlNmQxYzRlYWM3MDY0NWQ3Nzg3NWZkMWFlOTUzOGY5MjA1NzczYTMxNzkzNpsdIiQ=: 00:27:26.830 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:26.830 16:19:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.830 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:26.830 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:26.830 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:26.830 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.830 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:26.830 16:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.830 16:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.830 16:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.830 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.830 16:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:26.830 16:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:26.830 16:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:26.830 16:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.830 16:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.830 16:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:26.830 16:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.830 16:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:26.830 16:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:26.830 16:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:26.831 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:26.831 16:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.831 16:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.091 nvme0n1 00:27:27.091 16:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.091 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.091 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.091 16:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.091 16:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.091 16:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.091 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.091 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.091 16:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.091 16:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.091 16:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.091 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:27.091 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:27.091 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.091 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:27.091 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:27.091 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:27.091 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDNhMmE1MTQ4YmQyZTVmOTE2ZDU3OWUxOWVmOTRlOTc3NTkzODkwMzIyZTRkNzYzHwyBBw==: 00:27:27.091 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: 00:27:27.091 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:27.091 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:27.091 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDNhMmE1MTQ4YmQyZTVmOTE2ZDU3OWUxOWVmOTRlOTc3NTkzODkwMzIyZTRkNzYzHwyBBw==: 00:27:27.091 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: ]] 00:27:27.091 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: 00:27:27.091 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:27.091 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.091 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:27.091 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:27.091 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:27.091 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.091 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:27.091 16:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.091 16:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.091 16:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.091 16:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.091 16:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:27.091 16:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:27.091 16:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:27.091 16:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.091 16:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.091 16:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:27.091 16:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.091 16:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:27.091 16:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:27.091 16:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:27.091 16:19:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:27.091 16:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.091 16:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.351 nvme0n1 00:27:27.352 16:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.352 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.352 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.352 16:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.352 16:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.352 16:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.352 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.352 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.352 16:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.352 16:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.352 16:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.352 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.352 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:27.352 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.352 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:27.352 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:27.352 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:27.352 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmNiYWQyZTg3NWFkNjBkNjA2NDAwMjIzZWZkNWE0Yjc0ENZS: 00:27:27.352 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWQxNDRjN2E0MWZkZmVjOGEzYTYzOTgyZGVjYjYyOTJMGr0e: 00:27:27.352 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:27.352 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:27.352 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmNiYWQyZTg3NWFkNjBkNjA2NDAwMjIzZWZkNWE0Yjc0ENZS: 00:27:27.352 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWQxNDRjN2E0MWZkZmVjOGEzYTYzOTgyZGVjYjYyOTJMGr0e: ]] 00:27:27.352 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWQxNDRjN2E0MWZkZmVjOGEzYTYzOTgyZGVjYjYyOTJMGr0e: 00:27:27.352 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:27.352 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.352 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:27.352 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:27.352 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:27.352 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.352 16:19:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:27.352 16:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.352 16:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.352 16:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.352 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.352 16:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:27.352 16:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:27.352 16:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:27.352 16:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.352 16:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.352 16:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:27.352 16:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.352 16:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:27.352 16:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:27.352 16:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:27.352 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:27.352 16:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.352 16:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.611 nvme0n1 00:27:27.611 16:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.611 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.611 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.611 16:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.611 16:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.611 16:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.871 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.872 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.872 16:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.872 16:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.872 16:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.872 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.872 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:27.872 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.872 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:27.872 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:27.872 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
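[Editor's note] Each pass is closed by the check traced at host/auth.sh@64-65: the controller list is read back over JSON-RPC, the only expected name is nvme0 (the \n\v\m\e\0 in the trace is the same literal with every character quoted, since the right-hand side of == inside [[ ]] is a pattern), and the controller is detached so the next digest/dhgroup/key combination starts from a clean state.

# Closing check of each pass, as traced above.
ctrlr=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
[[ $ctrlr == "nvme0" ]]
rpc_cmd bdev_nvme_detach_controller nvme0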
00:27:27.872 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjgwYjFmMDNhNzU3ZjViNDZkY2RjMzIxMDkzMGQ4M2IxNzA2ZTAwYjkzNmU0MTI1Tx9Uwg==: 00:27:27.872 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzgzNDZiMDU4NmRlNThlZDlkMGIyMzQzNDAyNmU4NTbZmCqK: 00:27:27.872 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:27.872 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:27.872 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjgwYjFmMDNhNzU3ZjViNDZkY2RjMzIxMDkzMGQ4M2IxNzA2ZTAwYjkzNmU0MTI1Tx9Uwg==: 00:27:27.872 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzgzNDZiMDU4NmRlNThlZDlkMGIyMzQzNDAyNmU4NTbZmCqK: ]] 00:27:27.872 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzgzNDZiMDU4NmRlNThlZDlkMGIyMzQzNDAyNmU4NTbZmCqK: 00:27:27.872 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:27.872 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.872 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:27.872 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:27.872 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:27.872 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.872 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:27.872 16:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.872 16:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.872 16:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.872 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.872 16:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:27.872 16:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:27.872 16:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:27.872 16:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.872 16:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.872 16:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:27.872 16:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.872 16:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:27.872 16:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:27.872 16:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:27.872 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:27.872 16:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.872 16:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.133 nvme0n1 00:27:28.133 16:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.133 16:19:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.133 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.133 16:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.133 16:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.133 16:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.133 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.133 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.133 16:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.133 16:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.133 16:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.133 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.133 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:28.133 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.133 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:28.133 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:28.133 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:28.133 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2VmOTllOTM5OTRiZmMyMDRkOTMxMWM4ZmU5ZmVmNDQxY2I0OGQ3N2E3Mzg0Mzc2MjMwYjdiYzVjNDg0MzUwM7wyRVI=: 00:27:28.133 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:28.133 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:28.133 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:28.133 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2VmOTllOTM5OTRiZmMyMDRkOTMxMWM4ZmU5ZmVmNDQxY2I0OGQ3N2E3Mzg0Mzc2MjMwYjdiYzVjNDg0MzUwM7wyRVI=: 00:27:28.133 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:28.133 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:28.133 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.133 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:28.133 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:28.133 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:28.133 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.133 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:28.133 16:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.133 16:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.133 16:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.133 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.133 16:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:28.133 16:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:28.133 16:19:03 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:27:28.133 16:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.133 16:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.133 16:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:28.133 16:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.133 16:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:28.133 16:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:28.133 16:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:28.133 16:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:28.133 16:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.133 16:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.393 nvme0n1 00:27:28.393 16:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.393 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.393 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.393 16:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.393 16:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.393 16:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.393 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.393 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.393 16:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.393 16:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.393 16:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.393 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:28.393 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.394 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:28.394 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.394 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:28.394 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:28.394 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:28.394 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzY3MjhiMGYxMGFhZjZjY2RhOGY2YzQyMWZiMzY0MzKCiVxi: 00:27:28.394 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTM0ODBjYzA3OTRhYWZiYTU0YmZlNmQxYzRlYWM3MDY0NWQ3Nzg3NWZkMWFlOTUzOGY5MjA1NzczYTMxNzkzNpsdIiQ=: 00:27:28.394 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:28.394 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:28.394 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzY3MjhiMGYxMGFhZjZjY2RhOGY2YzQyMWZiMzY0MzKCiVxi: 00:27:28.394 16:19:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTM0ODBjYzA3OTRhYWZiYTU0YmZlNmQxYzRlYWM3MDY0NWQ3Nzg3NWZkMWFlOTUzOGY5MjA1NzczYTMxNzkzNpsdIiQ=: ]] 00:27:28.394 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTM0ODBjYzA3OTRhYWZiYTU0YmZlNmQxYzRlYWM3MDY0NWQ3Nzg3NWZkMWFlOTUzOGY5MjA1NzczYTMxNzkzNpsdIiQ=: 00:27:28.394 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:28.394 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.394 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:28.394 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:28.394 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:28.394 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.394 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:28.394 16:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.394 16:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.654 16:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.654 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.654 16:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:28.654 16:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:28.654 16:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:28.654 16:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.654 16:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.654 16:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:28.654 16:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.654 16:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:28.654 16:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:28.654 16:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:28.654 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:28.654 16:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.654 16:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.915 nvme0n1 00:27:28.915 16:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.915 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.915 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.915 16:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.915 16:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.915 16:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.915 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.915 
16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.915 16:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.915 16:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.175 16:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.176 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.176 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:29.176 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.176 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:29.176 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:29.176 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:29.176 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDNhMmE1MTQ4YmQyZTVmOTE2ZDU3OWUxOWVmOTRlOTc3NTkzODkwMzIyZTRkNzYzHwyBBw==: 00:27:29.176 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: 00:27:29.176 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:29.176 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:29.176 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDNhMmE1MTQ4YmQyZTVmOTE2ZDU3OWUxOWVmOTRlOTc3NTkzODkwMzIyZTRkNzYzHwyBBw==: 00:27:29.176 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: ]] 00:27:29.176 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: 00:27:29.176 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:29.176 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.176 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:29.176 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:29.176 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:29.176 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.176 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:29.176 16:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.176 16:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.176 16:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.176 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.176 16:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:29.176 16:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:29.176 16:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:29.176 16:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.176 16:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.176 16:19:04 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:29.176 16:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.176 16:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:29.176 16:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:29.176 16:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:29.176 16:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:29.176 16:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.176 16:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.437 nvme0n1 00:27:29.437 16:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.437 16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.437 16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.437 16:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.437 16:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.437 16:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.698 16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.698 16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.698 16:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.698 16:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.698 16:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.698 16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.698 16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:29.698 16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.698 16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:29.698 16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:29.698 16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:29.698 16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmNiYWQyZTg3NWFkNjBkNjA2NDAwMjIzZWZkNWE0Yjc0ENZS: 00:27:29.698 16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWQxNDRjN2E0MWZkZmVjOGEzYTYzOTgyZGVjYjYyOTJMGr0e: 00:27:29.698 16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:29.698 16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:29.698 16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmNiYWQyZTg3NWFkNjBkNjA2NDAwMjIzZWZkNWE0Yjc0ENZS: 00:27:29.698 16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWQxNDRjN2E0MWZkZmVjOGEzYTYzOTgyZGVjYjYyOTJMGr0e: ]] 00:27:29.698 16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWQxNDRjN2E0MWZkZmVjOGEzYTYzOTgyZGVjYjYyOTJMGr0e: 00:27:29.698 16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:29.698 16:19:05 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.698 16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:29.698 16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:29.698 16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:29.698 16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.698 16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:29.698 16:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.698 16:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.698 16:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.698 16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.698 16:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:29.698 16:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:29.699 16:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:29.699 16:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.699 16:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.699 16:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:29.699 16:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.699 16:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:29.699 16:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:29.699 16:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:29.699 16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:29.699 16:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.699 16:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.271 nvme0n1 00:27:30.271 16:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.271 16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.271 16:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.271 16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.271 16:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.271 16:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.271 16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.271 16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.271 16:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.271 16:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.271 16:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.271 16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.271 
16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:30.271 16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.271 16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:30.271 16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:30.271 16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:30.271 16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjgwYjFmMDNhNzU3ZjViNDZkY2RjMzIxMDkzMGQ4M2IxNzA2ZTAwYjkzNmU0MTI1Tx9Uwg==: 00:27:30.271 16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzgzNDZiMDU4NmRlNThlZDlkMGIyMzQzNDAyNmU4NTbZmCqK: 00:27:30.271 16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:30.271 16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:30.271 16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjgwYjFmMDNhNzU3ZjViNDZkY2RjMzIxMDkzMGQ4M2IxNzA2ZTAwYjkzNmU0MTI1Tx9Uwg==: 00:27:30.271 16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzgzNDZiMDU4NmRlNThlZDlkMGIyMzQzNDAyNmU4NTbZmCqK: ]] 00:27:30.271 16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzgzNDZiMDU4NmRlNThlZDlkMGIyMzQzNDAyNmU4NTbZmCqK: 00:27:30.272 16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:30.272 16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.272 16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:30.272 16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:30.272 16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:30.272 16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.272 16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:30.272 16:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.272 16:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.272 16:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.272 16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.272 16:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:30.272 16:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.272 16:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.272 16:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.272 16:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.272 16:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.272 16:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.272 16:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.272 16:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.272 16:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.272 16:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:30.272 16:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.272 16:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.533 nvme0n1 00:27:30.533 16:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.533 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.533 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.533 16:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.533 16:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.533 16:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.794 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.794 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.794 16:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.794 16:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.794 16:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.794 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.794 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:30.794 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.794 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:30.794 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:30.794 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:30.794 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2VmOTllOTM5OTRiZmMyMDRkOTMxMWM4ZmU5ZmVmNDQxY2I0OGQ3N2E3Mzg0Mzc2MjMwYjdiYzVjNDg0MzUwM7wyRVI=: 00:27:30.794 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:30.794 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:30.794 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:30.794 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2VmOTllOTM5OTRiZmMyMDRkOTMxMWM4ZmU5ZmVmNDQxY2I0OGQ3N2E3Mzg0Mzc2MjMwYjdiYzVjNDg0MzUwM7wyRVI=: 00:27:30.794 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:30.794 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:30.794 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.794 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:30.794 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:30.794 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:30.794 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.794 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:30.794 16:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.794 16:19:06 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:30.794 16:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.794 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.794 16:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:30.794 16:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.794 16:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.794 16:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.794 16:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.794 16:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.794 16:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.794 16:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.794 16:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.794 16:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.794 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:30.794 16:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.794 16:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.369 nvme0n1 00:27:31.369 16:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.369 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.369 16:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.369 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.369 16:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.369 16:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.369 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.369 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.369 16:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.369 16:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.369 16:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.369 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:31.369 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.369 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:31.369 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.369 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:31.369 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:31.369 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:31.369 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzY3MjhiMGYxMGFhZjZjY2RhOGY2YzQyMWZiMzY0MzKCiVxi: 00:27:31.369 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OTM0ODBjYzA3OTRhYWZiYTU0YmZlNmQxYzRlYWM3MDY0NWQ3Nzg3NWZkMWFlOTUzOGY5MjA1NzczYTMxNzkzNpsdIiQ=: 00:27:31.369 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:31.369 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:31.369 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzY3MjhiMGYxMGFhZjZjY2RhOGY2YzQyMWZiMzY0MzKCiVxi: 00:27:31.369 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTM0ODBjYzA3OTRhYWZiYTU0YmZlNmQxYzRlYWM3MDY0NWQ3Nzg3NWZkMWFlOTUzOGY5MjA1NzczYTMxNzkzNpsdIiQ=: ]] 00:27:31.369 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTM0ODBjYzA3OTRhYWZiYTU0YmZlNmQxYzRlYWM3MDY0NWQ3Nzg3NWZkMWFlOTUzOGY5MjA1NzczYTMxNzkzNpsdIiQ=: 00:27:31.369 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:31.369 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.369 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:31.369 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:31.369 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:31.369 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.369 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:31.369 16:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.369 16:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.369 16:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.369 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.369 16:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:31.369 16:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.369 16:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.369 16:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.369 16:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.369 16:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.369 16:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.369 16:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.369 16:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.369 16:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.369 16:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:31.369 16:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.369 16:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.941 nvme0n1 00:27:31.941 16:19:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.941 16:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.941 16:19:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.941 16:19:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.941 16:19:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.941 16:19:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.941 16:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.203 16:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.203 16:19:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.204 16:19:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.204 16:19:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.204 16:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.204 16:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:32.204 16:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.204 16:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:32.204 16:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:32.204 16:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:32.204 16:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDNhMmE1MTQ4YmQyZTVmOTE2ZDU3OWUxOWVmOTRlOTc3NTkzODkwMzIyZTRkNzYzHwyBBw==: 00:27:32.204 16:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: 00:27:32.204 16:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:32.204 16:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:32.204 16:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDNhMmE1MTQ4YmQyZTVmOTE2ZDU3OWUxOWVmOTRlOTc3NTkzODkwMzIyZTRkNzYzHwyBBw==: 00:27:32.204 16:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: ]] 00:27:32.204 16:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: 00:27:32.204 16:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:32.204 16:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.204 16:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:32.204 16:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:32.204 16:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:32.204 16:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.204 16:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:32.204 16:19:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.204 16:19:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.204 16:19:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.204 16:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.204 16:19:07 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:27:32.204 16:19:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.204 16:19:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:32.204 16:19:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.204 16:19:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.204 16:19:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:32.204 16:19:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.204 16:19:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:32.204 16:19:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.204 16:19:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.204 16:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:32.204 16:19:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.204 16:19:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.775 nvme0n1 00:27:32.775 16:19:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.775 16:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.775 16:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.775 16:19:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.775 16:19:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.775 16:19:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.037 16:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.037 16:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.037 16:19:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.037 16:19:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.037 16:19:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.037 16:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.037 16:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:33.037 16:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.037 16:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:33.037 16:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:33.037 16:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:33.037 16:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmNiYWQyZTg3NWFkNjBkNjA2NDAwMjIzZWZkNWE0Yjc0ENZS: 00:27:33.037 16:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWQxNDRjN2E0MWZkZmVjOGEzYTYzOTgyZGVjYjYyOTJMGr0e: 00:27:33.037 16:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:33.037 16:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:33.037 16:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YmNiYWQyZTg3NWFkNjBkNjA2NDAwMjIzZWZkNWE0Yjc0ENZS: 00:27:33.037 16:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWQxNDRjN2E0MWZkZmVjOGEzYTYzOTgyZGVjYjYyOTJMGr0e: ]] 00:27:33.037 16:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWQxNDRjN2E0MWZkZmVjOGEzYTYzOTgyZGVjYjYyOTJMGr0e: 00:27:33.037 16:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:33.037 16:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.037 16:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:33.037 16:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:33.037 16:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:33.037 16:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.037 16:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:33.037 16:19:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.037 16:19:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.037 16:19:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.037 16:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.037 16:19:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:33.037 16:19:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:33.037 16:19:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:33.037 16:19:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.037 16:19:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.037 16:19:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:33.037 16:19:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.037 16:19:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:33.037 16:19:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:33.037 16:19:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:33.037 16:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:33.037 16:19:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.037 16:19:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.608 nvme0n1 00:27:33.608 16:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.608 16:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.608 16:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.608 16:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.608 16:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.608 16:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.608 16:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.608 
16:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.608 16:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.608 16:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.608 16:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.608 16:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.608 16:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:33.608 16:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.608 16:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:33.608 16:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:33.608 16:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:33.608 16:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjgwYjFmMDNhNzU3ZjViNDZkY2RjMzIxMDkzMGQ4M2IxNzA2ZTAwYjkzNmU0MTI1Tx9Uwg==: 00:27:33.608 16:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzgzNDZiMDU4NmRlNThlZDlkMGIyMzQzNDAyNmU4NTbZmCqK: 00:27:33.608 16:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:33.608 16:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:33.608 16:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjgwYjFmMDNhNzU3ZjViNDZkY2RjMzIxMDkzMGQ4M2IxNzA2ZTAwYjkzNmU0MTI1Tx9Uwg==: 00:27:33.608 16:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzgzNDZiMDU4NmRlNThlZDlkMGIyMzQzNDAyNmU4NTbZmCqK: ]] 00:27:33.608 16:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzgzNDZiMDU4NmRlNThlZDlkMGIyMzQzNDAyNmU4NTbZmCqK: 00:27:33.608 16:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:33.608 16:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.608 16:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:33.608 16:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:33.608 16:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:33.608 16:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.608 16:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:33.608 16:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.608 16:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.608 16:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.608 16:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.608 16:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:33.608 16:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:33.608 16:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:33.608 16:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.608 16:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.608 16:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
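Each block of trace above is one connect_authenticate iteration: the bdev_nvme options are narrowed to a single digest/DH-group pair, the controller is attached with the matching DH-HMAC-CHAP key pair, its presence is verified, and it is detached again. A minimal sketch of that flow, reconstructed from the trace; rpc_cmd, the keys/ckeys arrays, and the NQNs are assumed to be defined elsewhere in auth.sh exactly as they appear here, and 10.0.0.1 is the initiator address this run resolved.

connect_authenticate_sketch() {
  local digest=$1 dhgroup=$2 keyid=$3
  # Pass --dhchap-ctrlr-key only when a controller key exists for this keyid
  # (keyid 4 has an empty ckey in this run, so the flag is omitted there).
  local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

  # Limit the initiator to the digest/DH-group combination under test.
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # Attach to the kernel nvmet subsystem over TCP with the per-keyid secrets.
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" "${ckey[@]}"

  # Authentication succeeded if the controller is visible; then clean up.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
  rpc_cmd bdev_nvme_detach_controller nvme0
}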
00:27:33.608 16:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.608 16:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:33.609 16:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:33.609 16:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:33.609 16:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:33.609 16:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.609 16:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.551 nvme0n1 00:27:34.551 16:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.551 16:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.551 16:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.551 16:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.551 16:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.551 16:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.551 16:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.551 16:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.551 16:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.551 16:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.551 16:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.551 16:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.551 16:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:34.551 16:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.551 16:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:34.551 16:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:34.551 16:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:34.551 16:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2VmOTllOTM5OTRiZmMyMDRkOTMxMWM4ZmU5ZmVmNDQxY2I0OGQ3N2E3Mzg0Mzc2MjMwYjdiYzVjNDg0MzUwM7wyRVI=: 00:27:34.551 16:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:34.551 16:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:34.551 16:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:34.551 16:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2VmOTllOTM5OTRiZmMyMDRkOTMxMWM4ZmU5ZmVmNDQxY2I0OGQ3N2E3Mzg0Mzc2MjMwYjdiYzVjNDg0MzUwM7wyRVI=: 00:27:34.551 16:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:34.551 16:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:34.551 16:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.551 16:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:34.551 16:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:34.551 
16:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:34.551 16:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.551 16:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:34.551 16:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.551 16:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.551 16:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.552 16:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.552 16:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:34.552 16:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:34.552 16:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:34.552 16:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.552 16:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.552 16:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:34.552 16:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.552 16:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:34.552 16:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:34.552 16:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:34.552 16:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:34.552 16:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.552 16:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.494 nvme0n1 00:27:35.494 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.494 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.494 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.494 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.494 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.494 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.494 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.494 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzY3MjhiMGYxMGFhZjZjY2RhOGY2YzQyMWZiMzY0MzKCiVxi: 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTM0ODBjYzA3OTRhYWZiYTU0YmZlNmQxYzRlYWM3MDY0NWQ3Nzg3NWZkMWFlOTUzOGY5MjA1NzczYTMxNzkzNpsdIiQ=: 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzY3MjhiMGYxMGFhZjZjY2RhOGY2YzQyMWZiMzY0MzKCiVxi: 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTM0ODBjYzA3OTRhYWZiYTU0YmZlNmQxYzRlYWM3MDY0NWQ3Nzg3NWZkMWFlOTUzOGY5MjA1NzczYTMxNzkzNpsdIiQ=: ]] 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTM0ODBjYzA3OTRhYWZiYTU0YmZlNmQxYzRlYWM3MDY0NWQ3Nzg3NWZkMWFlOTUzOGY5MjA1NzczYTMxNzkzNpsdIiQ=: 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.495 nvme0n1 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.495 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.756 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.756 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.756 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:35.756 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.756 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDNhMmE1MTQ4YmQyZTVmOTE2ZDU3OWUxOWVmOTRlOTc3NTkzODkwMzIyZTRkNzYzHwyBBw==: 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDNhMmE1MTQ4YmQyZTVmOTE2ZDU3OWUxOWVmOTRlOTc3NTkzODkwMzIyZTRkNzYzHwyBBw==: 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: ]] 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
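Key index 4 carries no controller key in this run (its ckey is empty in the trace), so the --dhchap-ctrlr-key option disappears from the attach command instead of being passed with an empty value. That behaviour comes from bash's ${var:+word} expansion in the ckey=() assignment seen above; a standalone illustration follows, with placeholder array values rather than real DHHC-1 secrets.

#!/usr/bin/env bash
# ${ckeys[keyid]:+...} expands to the bracketed words only when the element is
# set and non-empty; otherwise it expands to nothing, so no option is emitted.
# The array values here are placeholders, not real key material.
ckeys=([1]="ctrl-key-present" [4]="")

for keyid in 1 4; do
  args=(--dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "keyid=${keyid}: ${args[*]}"
done
# keyid=1: --dhchap-key key1 --dhchap-ctrlr-key ckey1
# keyid=4: --dhchap-key key4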
00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.757 nvme0n1 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmNiYWQyZTg3NWFkNjBkNjA2NDAwMjIzZWZkNWE0Yjc0ENZS: 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWQxNDRjN2E0MWZkZmVjOGEzYTYzOTgyZGVjYjYyOTJMGr0e: 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmNiYWQyZTg3NWFkNjBkNjA2NDAwMjIzZWZkNWE0Yjc0ENZS: 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWQxNDRjN2E0MWZkZmVjOGEzYTYzOTgyZGVjYjYyOTJMGr0e: ]] 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWQxNDRjN2E0MWZkZmVjOGEzYTYzOTgyZGVjYjYyOTJMGr0e: 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.757 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.018 16:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:36.018 16:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:36.018 16:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:36.018 16:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.018 16:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.018 16:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:36.018 16:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.018 16:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:36.018 16:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:36.018 16:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:36.018 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:36.018 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.018 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.018 nvme0n1 00:27:36.018 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.018 16:19:11 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.018 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.018 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.018 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.018 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.018 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.019 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.019 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.019 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.019 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.019 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.019 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:36.019 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.019 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:36.019 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:36.019 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:36.019 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjgwYjFmMDNhNzU3ZjViNDZkY2RjMzIxMDkzMGQ4M2IxNzA2ZTAwYjkzNmU0MTI1Tx9Uwg==: 00:27:36.019 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzgzNDZiMDU4NmRlNThlZDlkMGIyMzQzNDAyNmU4NTbZmCqK: 00:27:36.019 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:36.019 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:36.019 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjgwYjFmMDNhNzU3ZjViNDZkY2RjMzIxMDkzMGQ4M2IxNzA2ZTAwYjkzNmU0MTI1Tx9Uwg==: 00:27:36.019 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzgzNDZiMDU4NmRlNThlZDlkMGIyMzQzNDAyNmU4NTbZmCqK: ]] 00:27:36.019 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzgzNDZiMDU4NmRlNThlZDlkMGIyMzQzNDAyNmU4NTbZmCqK: 00:27:36.019 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:36.019 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.019 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:36.019 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:36.019 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:36.019 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.019 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:36.019 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.019 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.019 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.019 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.019 16:19:11 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:27:36.019 16:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:36.019 16:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:36.019 16:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.019 16:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.019 16:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:36.019 16:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.019 16:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:36.019 16:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:36.019 16:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:36.019 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:36.019 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.019 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.280 nvme0n1 00:27:36.280 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.280 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.280 16:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.280 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.280 16:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.280 16:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.280 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.280 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.280 16:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.280 16:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.280 16:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.280 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.280 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:36.280 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.280 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:36.280 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:36.280 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:36.280 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2VmOTllOTM5OTRiZmMyMDRkOTMxMWM4ZmU5ZmVmNDQxY2I0OGQ3N2E3Mzg0Mzc2MjMwYjdiYzVjNDg0MzUwM7wyRVI=: 00:27:36.280 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:36.280 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:36.280 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:36.280 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Y2VmOTllOTM5OTRiZmMyMDRkOTMxMWM4ZmU5ZmVmNDQxY2I0OGQ3N2E3Mzg0Mzc2MjMwYjdiYzVjNDg0MzUwM7wyRVI=: 00:27:36.280 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:36.280 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:36.280 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.280 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:36.280 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:36.280 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:36.280 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.280 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:36.280 16:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.280 16:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.280 16:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.280 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.280 16:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:36.280 16:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:36.280 16:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:36.280 16:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.280 16:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.280 16:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:36.280 16:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.280 16:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:36.280 16:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:36.280 16:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:36.280 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:36.280 16:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.280 16:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.542 nvme0n1 00:27:36.542 16:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.542 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.542 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.542 16:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.542 16:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.542 16:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.542 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.542 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.542 16:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:27:36.542 16:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.542 16:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.542 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:36.542 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.542 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:36.542 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.542 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:36.542 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:36.542 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:36.542 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzY3MjhiMGYxMGFhZjZjY2RhOGY2YzQyMWZiMzY0MzKCiVxi: 00:27:36.542 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTM0ODBjYzA3OTRhYWZiYTU0YmZlNmQxYzRlYWM3MDY0NWQ3Nzg3NWZkMWFlOTUzOGY5MjA1NzczYTMxNzkzNpsdIiQ=: 00:27:36.542 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:36.542 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:36.542 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzY3MjhiMGYxMGFhZjZjY2RhOGY2YzQyMWZiMzY0MzKCiVxi: 00:27:36.542 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTM0ODBjYzA3OTRhYWZiYTU0YmZlNmQxYzRlYWM3MDY0NWQ3Nzg3NWZkMWFlOTUzOGY5MjA1NzczYTMxNzkzNpsdIiQ=: ]] 00:27:36.542 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTM0ODBjYzA3OTRhYWZiYTU0YmZlNmQxYzRlYWM3MDY0NWQ3Nzg3NWZkMWFlOTUzOGY5MjA1NzczYTMxNzkzNpsdIiQ=: 00:27:36.542 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:36.542 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.542 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:36.542 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:36.542 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:36.542 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.542 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:36.542 16:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.542 16:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.542 16:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.542 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.542 16:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:36.542 16:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:36.542 16:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:36.542 16:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.542 16:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.542 16:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
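The trace in this section repeats a single pattern for every (dhgroup, keyid) combination: the target-side key is programmed with nvmet_auth_set_key, then connect_authenticate reconfigures the host and attaches a controller with the matching DH-HMAC-CHAP keys. A minimal bash sketch of that skeleton, reconstructed from the host/auth.sh@101-104 line tags visible above; the digest variable and the helper bodies are not shown in this excerpt and are assumed:

    # Skeleton of the iteration seen in the xtrace; sha384 is the digest in
    # effect throughout this part of the log, and dhgroup cycles through the
    # ffdhe groups. Helper bodies are elided here.
    for dhgroup in "${dhgroups[@]}"; do                        # host/auth.sh@101
        for keyid in "${!keys[@]}"; do                         # host/auth.sh@102
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # @103: program the target-side key
            connect_authenticate "$digest" "$dhgroup" "$keyid" # @104: host-side set_options + attach
        done
    done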
00:27:36.542 16:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.542 16:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:36.542 16:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:36.542 16:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:36.542 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:36.542 16:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.542 16:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.804 nvme0n1 00:27:36.804 16:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.804 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.804 16:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.804 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.804 16:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.804 16:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.804 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.804 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.804 16:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.804 16:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.804 16:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.804 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.804 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:36.804 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.804 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:36.804 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:36.804 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:36.804 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDNhMmE1MTQ4YmQyZTVmOTE2ZDU3OWUxOWVmOTRlOTc3NTkzODkwMzIyZTRkNzYzHwyBBw==: 00:27:36.804 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: 00:27:36.804 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:36.804 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:36.804 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDNhMmE1MTQ4YmQyZTVmOTE2ZDU3OWUxOWVmOTRlOTc3NTkzODkwMzIyZTRkNzYzHwyBBw==: 00:27:36.804 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: ]] 00:27:36.804 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: 00:27:36.804 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
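The nvmf/common.sh@741-755 entries that precede each bdev_nvme_attach_controller call trace the get_main_ns_ip helper: it maps the active transport to the name of an environment variable and then resolves that name to the address passed as -a. A hedged reconstruction of that logic from the trace; the transport variable name and the early-return behaviour are assumptions, since only the expanded values ("tcp", NVMF_INITIATOR_IP, 10.0.0.1) appear above:

    get_main_ns_ip() {                                  # nvmf/common.sh@741-755
        local ip
        local -A ip_candidates=()                       # @742
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP      # @744
        ip_candidates["tcp"]=NVMF_INITIATOR_IP          # @745
        # @747: both the transport ("tcp" in this run) and the chosen variable
        # name must be non-empty before the indirection below.
        [[ -z $TEST_TRANSPORT ]] && return 1            # TEST_TRANSPORT is an assumed name
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}            # @748: ip=NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1                     # @750: resolved value is 10.0.0.1 here
        echo "${!ip}"                                   # @755
    }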
00:27:36.804 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.804 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:36.804 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:36.804 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:36.804 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.804 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:36.804 16:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.804 16:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.804 16:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.804 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.804 16:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:36.804 16:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:36.804 16:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:36.804 16:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.804 16:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.804 16:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:36.804 16:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.804 16:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:36.804 16:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:36.804 16:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:36.804 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:36.804 16:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.804 16:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.066 nvme0n1 00:27:37.066 16:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.066 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.066 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.066 16:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.066 16:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.066 16:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.066 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.066 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.066 16:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.066 16:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.066 16:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.066 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:27:37.066 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:37.066 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.066 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:37.066 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:37.066 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:37.066 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmNiYWQyZTg3NWFkNjBkNjA2NDAwMjIzZWZkNWE0Yjc0ENZS: 00:27:37.066 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWQxNDRjN2E0MWZkZmVjOGEzYTYzOTgyZGVjYjYyOTJMGr0e: 00:27:37.066 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:37.066 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:37.066 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmNiYWQyZTg3NWFkNjBkNjA2NDAwMjIzZWZkNWE0Yjc0ENZS: 00:27:37.066 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWQxNDRjN2E0MWZkZmVjOGEzYTYzOTgyZGVjYjYyOTJMGr0e: ]] 00:27:37.066 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWQxNDRjN2E0MWZkZmVjOGEzYTYzOTgyZGVjYjYyOTJMGr0e: 00:27:37.066 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:37.066 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.066 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:37.066 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:37.066 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:37.066 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.066 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:37.066 16:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.066 16:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.066 16:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.066 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.066 16:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:37.066 16:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:37.066 16:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:37.066 16:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.066 16:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.066 16:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:37.066 16:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.066 16:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:37.066 16:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:37.066 16:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:37.066 16:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:37.066 16:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.066 16:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.326 nvme0n1 00:27:37.326 16:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.326 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.326 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.326 16:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.326 16:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.326 16:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.326 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.326 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.326 16:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.326 16:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.326 16:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.326 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.326 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:37.326 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.326 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:37.326 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:37.326 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:37.326 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjgwYjFmMDNhNzU3ZjViNDZkY2RjMzIxMDkzMGQ4M2IxNzA2ZTAwYjkzNmU0MTI1Tx9Uwg==: 00:27:37.326 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzgzNDZiMDU4NmRlNThlZDlkMGIyMzQzNDAyNmU4NTbZmCqK: 00:27:37.326 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:37.326 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:37.326 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjgwYjFmMDNhNzU3ZjViNDZkY2RjMzIxMDkzMGQ4M2IxNzA2ZTAwYjkzNmU0MTI1Tx9Uwg==: 00:27:37.326 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzgzNDZiMDU4NmRlNThlZDlkMGIyMzQzNDAyNmU4NTbZmCqK: ]] 00:27:37.326 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzgzNDZiMDU4NmRlNThlZDlkMGIyMzQzNDAyNmU4NTbZmCqK: 00:27:37.326 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:37.326 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.326 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:37.326 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:37.326 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:37.326 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.326 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:37.326 16:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.326 16:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.326 16:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.326 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.326 16:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:37.326 16:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:37.326 16:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:37.326 16:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.326 16:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.326 16:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:37.326 16:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.326 16:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:37.326 16:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:37.326 16:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:37.326 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:37.326 16:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.326 16:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.586 nvme0n1 00:27:37.586 16:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.586 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.586 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.586 16:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.586 16:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.586 16:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.586 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.586 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.586 16:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.586 16:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.586 16:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.586 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.586 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:37.586 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.586 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:37.586 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:37.587 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:37.587 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:Y2VmOTllOTM5OTRiZmMyMDRkOTMxMWM4ZmU5ZmVmNDQxY2I0OGQ3N2E3Mzg0Mzc2MjMwYjdiYzVjNDg0MzUwM7wyRVI=: 00:27:37.587 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:37.587 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:37.587 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:37.587 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2VmOTllOTM5OTRiZmMyMDRkOTMxMWM4ZmU5ZmVmNDQxY2I0OGQ3N2E3Mzg0Mzc2MjMwYjdiYzVjNDg0MzUwM7wyRVI=: 00:27:37.587 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:37.587 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:37.587 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.587 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:37.587 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:37.587 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:37.587 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.587 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:37.587 16:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.587 16:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.587 16:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.587 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.587 16:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:37.587 16:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:37.587 16:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:37.587 16:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.587 16:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.587 16:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:37.587 16:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.587 16:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:37.587 16:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:37.587 16:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:37.587 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:37.587 16:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.587 16:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.847 nvme0n1 00:27:37.848 16:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.848 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.848 16:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.848 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.848 16:19:13 
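Each nvmet_auth_set_key call above (host/auth.sh@42-51) expands to the same sequence: cache the digest, dhgroup and the key/ctrlr-key pair for the given keyid, then echo the hmac spec, the dhgroup and the key material. The redirection targets of those echo commands are not visible in this xtrace, so the sketch below deliberately leaves them out; the argument handling is reconstructed from the expanded values in the trace:

    nvmet_auth_set_key() {                       # host/auth.sh@42-51
        local digest dhgroup keyid key ckey      # @42
        digest=$1 dhgroup=$2 keyid=$3            # @44 (e.g. sha384 ffdhe3072 4)
        key=${keys[keyid]}                       # @45
        ckey=${ckeys[keyid]}                     # @46 (empty string for keyid 4)
        # @48-51: the destinations of these writes are elided from the trace
        echo "hmac(${digest})"                   # @48
        echo "$dhgroup"                          # @49
        echo "$key"                              # @50
        [[ -z $ckey ]] || echo "$ckey"           # @51
    }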
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.848 16:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.848 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.848 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.848 16:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.848 16:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.848 16:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.848 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:37.848 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.848 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:37.848 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.848 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:37.848 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:37.848 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:37.848 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzY3MjhiMGYxMGFhZjZjY2RhOGY2YzQyMWZiMzY0MzKCiVxi: 00:27:37.848 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTM0ODBjYzA3OTRhYWZiYTU0YmZlNmQxYzRlYWM3MDY0NWQ3Nzg3NWZkMWFlOTUzOGY5MjA1NzczYTMxNzkzNpsdIiQ=: 00:27:37.848 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:37.848 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:37.848 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzY3MjhiMGYxMGFhZjZjY2RhOGY2YzQyMWZiMzY0MzKCiVxi: 00:27:37.848 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTM0ODBjYzA3OTRhYWZiYTU0YmZlNmQxYzRlYWM3MDY0NWQ3Nzg3NWZkMWFlOTUzOGY5MjA1NzczYTMxNzkzNpsdIiQ=: ]] 00:27:37.848 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTM0ODBjYzA3OTRhYWZiYTU0YmZlNmQxYzRlYWM3MDY0NWQ3Nzg3NWZkMWFlOTUzOGY5MjA1NzczYTMxNzkzNpsdIiQ=: 00:27:37.848 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:37.848 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.848 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:37.848 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:37.848 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:37.848 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.848 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:37.848 16:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.848 16:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.848 16:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.848 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.848 16:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:37.848 16:19:13 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:27:37.848 16:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:37.848 16:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.848 16:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.848 16:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:37.848 16:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.848 16:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:37.848 16:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:37.848 16:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:37.848 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:37.848 16:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.848 16:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.108 nvme0n1 00:27:38.108 16:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.108 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.108 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.108 16:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.108 16:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.368 16:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.368 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.368 16:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.368 16:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.368 16:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.368 16:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.368 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.368 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:38.368 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.368 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:38.368 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:38.368 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:38.368 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDNhMmE1MTQ4YmQyZTVmOTE2ZDU3OWUxOWVmOTRlOTc3NTkzODkwMzIyZTRkNzYzHwyBBw==: 00:27:38.368 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: 00:27:38.368 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:38.368 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:38.368 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDNhMmE1MTQ4YmQyZTVmOTE2ZDU3OWUxOWVmOTRlOTc3NTkzODkwMzIyZTRkNzYzHwyBBw==: 00:27:38.368 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: ]] 00:27:38.368 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: 00:27:38.368 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:38.368 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.368 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:38.368 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:38.368 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:38.368 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.368 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:38.368 16:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.368 16:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.368 16:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.368 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.368 16:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:38.368 16:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:38.368 16:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:38.369 16:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.369 16:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.369 16:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:38.369 16:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.369 16:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:38.369 16:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:38.369 16:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:38.369 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:38.369 16:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.369 16:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.629 nvme0n1 00:27:38.630 16:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.630 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.630 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.630 16:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.630 16:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.630 16:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.630 16:19:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.630 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.630 16:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.630 16:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.630 16:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.630 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.630 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:38.630 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.630 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:38.630 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:38.630 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:38.630 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmNiYWQyZTg3NWFkNjBkNjA2NDAwMjIzZWZkNWE0Yjc0ENZS: 00:27:38.630 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWQxNDRjN2E0MWZkZmVjOGEzYTYzOTgyZGVjYjYyOTJMGr0e: 00:27:38.630 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:38.630 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:38.630 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmNiYWQyZTg3NWFkNjBkNjA2NDAwMjIzZWZkNWE0Yjc0ENZS: 00:27:38.630 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWQxNDRjN2E0MWZkZmVjOGEzYTYzOTgyZGVjYjYyOTJMGr0e: ]] 00:27:38.630 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWQxNDRjN2E0MWZkZmVjOGEzYTYzOTgyZGVjYjYyOTJMGr0e: 00:27:38.630 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:38.630 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.630 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:38.630 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:38.630 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:38.630 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.630 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:38.630 16:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.630 16:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.630 16:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.630 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.630 16:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:38.630 16:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:38.630 16:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:38.630 16:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.630 16:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.630 16:19:14 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:38.630 16:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.630 16:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:38.630 16:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:38.630 16:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:38.630 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:38.630 16:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.630 16:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.890 nvme0n1 00:27:38.890 16:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.890 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.890 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.890 16:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.890 16:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.890 16:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.890 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.890 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.890 16:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.890 16:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.890 16:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.890 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.890 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:27:38.890 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.890 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:38.890 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:38.890 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:38.890 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjgwYjFmMDNhNzU3ZjViNDZkY2RjMzIxMDkzMGQ4M2IxNzA2ZTAwYjkzNmU0MTI1Tx9Uwg==: 00:27:38.890 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzgzNDZiMDU4NmRlNThlZDlkMGIyMzQzNDAyNmU4NTbZmCqK: 00:27:38.890 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:38.890 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:38.890 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjgwYjFmMDNhNzU3ZjViNDZkY2RjMzIxMDkzMGQ4M2IxNzA2ZTAwYjkzNmU0MTI1Tx9Uwg==: 00:27:38.890 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzgzNDZiMDU4NmRlNThlZDlkMGIyMzQzNDAyNmU4NTbZmCqK: ]] 00:27:38.890 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzgzNDZiMDU4NmRlNThlZDlkMGIyMzQzNDAyNmU4NTbZmCqK: 00:27:38.890 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:38.891 16:19:14 
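After each successful attach, the host/auth.sh@64-65 entries verify the controller and tear it down before the next combination is tried. The commands are visible verbatim in the trace; only the intermediate variable name below is assumed:

    # @64: the RPC output is filtered with jq and compared against the
    # expected controller name; a mismatch would fail the test case.
    ctrlr_name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $ctrlr_name == "nvme0" ]]
    # @65: detach so the next (dhgroup, keyid) iteration starts clean.
    rpc_cmd bdev_nvme_detach_controller nvme0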
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.891 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:38.891 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:38.891 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:38.891 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.891 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:38.891 16:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.891 16:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.151 16:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.151 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.151 16:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:39.151 16:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:39.151 16:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:39.151 16:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.151 16:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.151 16:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:39.151 16:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.151 16:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:39.151 16:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:39.151 16:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:39.151 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:39.151 16:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.151 16:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.151 nvme0n1 00:27:39.151 16:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.411 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.411 16:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.411 16:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.411 16:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.411 16:19:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.411 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.411 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.411 16:19:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.411 16:19:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.411 16:19:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.411 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:39.411 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:39.411 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.411 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:39.411 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:39.411 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:39.411 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2VmOTllOTM5OTRiZmMyMDRkOTMxMWM4ZmU5ZmVmNDQxY2I0OGQ3N2E3Mzg0Mzc2MjMwYjdiYzVjNDg0MzUwM7wyRVI=: 00:27:39.411 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:39.411 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:39.411 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:39.411 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2VmOTllOTM5OTRiZmMyMDRkOTMxMWM4ZmU5ZmVmNDQxY2I0OGQ3N2E3Mzg0Mzc2MjMwYjdiYzVjNDg0MzUwM7wyRVI=: 00:27:39.411 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:39.411 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:39.411 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.411 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:39.411 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:39.411 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:39.411 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.411 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:39.411 16:19:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.411 16:19:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.411 16:19:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.411 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.411 16:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:39.411 16:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:39.411 16:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:39.411 16:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.411 16:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.411 16:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:39.411 16:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.411 16:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:39.411 16:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:39.411 16:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:39.411 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:39.411 16:19:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:27:39.411 16:19:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.671 nvme0n1 00:27:39.671 16:19:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.671 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.671 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.671 16:19:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.671 16:19:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.671 16:19:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.671 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.671 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.671 16:19:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.671 16:19:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.671 16:19:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.671 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:39.671 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.671 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:39.671 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.671 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:39.671 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:39.671 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:39.671 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzY3MjhiMGYxMGFhZjZjY2RhOGY2YzQyMWZiMzY0MzKCiVxi: 00:27:39.671 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTM0ODBjYzA3OTRhYWZiYTU0YmZlNmQxYzRlYWM3MDY0NWQ3Nzg3NWZkMWFlOTUzOGY5MjA1NzczYTMxNzkzNpsdIiQ=: 00:27:39.671 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:39.671 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:39.671 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzY3MjhiMGYxMGFhZjZjY2RhOGY2YzQyMWZiMzY0MzKCiVxi: 00:27:39.671 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTM0ODBjYzA3OTRhYWZiYTU0YmZlNmQxYzRlYWM3MDY0NWQ3Nzg3NWZkMWFlOTUzOGY5MjA1NzczYTMxNzkzNpsdIiQ=: ]] 00:27:39.671 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTM0ODBjYzA3OTRhYWZiYTU0YmZlNmQxYzRlYWM3MDY0NWQ3Nzg3NWZkMWFlOTUzOGY5MjA1NzczYTMxNzkzNpsdIiQ=: 00:27:39.671 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:39.671 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.671 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:39.671 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:39.671 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:39.671 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.671 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:27:39.671 16:19:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.671 16:19:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.671 16:19:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.671 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.671 16:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:39.671 16:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:39.671 16:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:39.671 16:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.671 16:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.671 16:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:39.671 16:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.671 16:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:39.671 16:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:39.671 16:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:39.671 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:39.671 16:19:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.672 16:19:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.241 nvme0n1 00:27:40.241 16:19:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.241 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.241 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.241 16:19:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.241 16:19:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.241 16:19:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.241 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.241 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.241 16:19:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.241 16:19:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.241 16:19:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.241 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.241 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:40.241 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.241 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:40.241 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:40.241 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:40.241 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MDNhMmE1MTQ4YmQyZTVmOTE2ZDU3OWUxOWVmOTRlOTc3NTkzODkwMzIyZTRkNzYzHwyBBw==: 00:27:40.241 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: 00:27:40.242 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:40.242 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:40.242 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDNhMmE1MTQ4YmQyZTVmOTE2ZDU3OWUxOWVmOTRlOTc3NTkzODkwMzIyZTRkNzYzHwyBBw==: 00:27:40.242 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: ]] 00:27:40.242 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: 00:27:40.242 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:40.242 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.242 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:40.242 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:40.242 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:40.242 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.242 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:40.242 16:19:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.242 16:19:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.242 16:19:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.242 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.242 16:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:40.242 16:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:40.242 16:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:40.242 16:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.242 16:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.242 16:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:40.242 16:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.242 16:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:40.242 16:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:40.242 16:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:40.242 16:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:40.242 16:19:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.242 16:19:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.836 nvme0n1 00:27:40.836 16:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.836 16:19:16 
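The attach calls in this trace carry --dhchap-ctrlr-key for keyids 0-3 but not for keyid 4, where ckeys[4] is empty and the host/auth.sh@58 expansion produces no extra arguments. A short illustration of that ${var:+...} idiom, using the addressing and NQNs exactly as they appear above:

    # host/auth.sh@58: the array stays empty when ckeys[keyid] is unset or empty,
    # so the bidirectional-auth flag is added only when a ctrlr key exists.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    # host/auth.sh@61: with keyid=4 this expands without --dhchap-ctrlr-key.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"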
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.836 16:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.836 16:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.836 16:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.836 16:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.836 16:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.836 16:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.836 16:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.836 16:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.836 16:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.836 16:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.836 16:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:40.836 16:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.836 16:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:40.836 16:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:40.836 16:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:40.836 16:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmNiYWQyZTg3NWFkNjBkNjA2NDAwMjIzZWZkNWE0Yjc0ENZS: 00:27:40.836 16:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWQxNDRjN2E0MWZkZmVjOGEzYTYzOTgyZGVjYjYyOTJMGr0e: 00:27:40.836 16:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:40.836 16:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:40.836 16:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmNiYWQyZTg3NWFkNjBkNjA2NDAwMjIzZWZkNWE0Yjc0ENZS: 00:27:40.836 16:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWQxNDRjN2E0MWZkZmVjOGEzYTYzOTgyZGVjYjYyOTJMGr0e: ]] 00:27:40.836 16:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWQxNDRjN2E0MWZkZmVjOGEzYTYzOTgyZGVjYjYyOTJMGr0e: 00:27:40.836 16:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:40.836 16:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.836 16:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:40.836 16:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:40.836 16:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:40.836 16:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.836 16:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:40.836 16:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.836 16:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.836 16:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.836 16:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.836 16:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:27:40.836 16:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:40.836 16:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:40.836 16:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.836 16:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.836 16:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:40.836 16:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.836 16:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:40.836 16:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:40.836 16:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:40.836 16:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:40.836 16:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.837 16:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.405 nvme0n1 00:27:41.405 16:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.405 16:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.406 16:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.406 16:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.406 16:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.406 16:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.406 16:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.406 16:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.406 16:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.406 16:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.406 16:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.406 16:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.406 16:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:41.406 16:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.406 16:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:41.406 16:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:41.406 16:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:41.406 16:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjgwYjFmMDNhNzU3ZjViNDZkY2RjMzIxMDkzMGQ4M2IxNzA2ZTAwYjkzNmU0MTI1Tx9Uwg==: 00:27:41.406 16:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzgzNDZiMDU4NmRlNThlZDlkMGIyMzQzNDAyNmU4NTbZmCqK: 00:27:41.406 16:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:41.406 16:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:41.406 16:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZjgwYjFmMDNhNzU3ZjViNDZkY2RjMzIxMDkzMGQ4M2IxNzA2ZTAwYjkzNmU0MTI1Tx9Uwg==: 00:27:41.406 16:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzgzNDZiMDU4NmRlNThlZDlkMGIyMzQzNDAyNmU4NTbZmCqK: ]] 00:27:41.406 16:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzgzNDZiMDU4NmRlNThlZDlkMGIyMzQzNDAyNmU4NTbZmCqK: 00:27:41.406 16:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:41.406 16:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.406 16:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:41.406 16:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:41.406 16:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:41.406 16:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.406 16:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:41.406 16:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.406 16:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.406 16:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.406 16:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.406 16:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:41.406 16:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:41.406 16:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:41.406 16:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.406 16:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.406 16:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:41.406 16:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.406 16:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:41.406 16:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:41.406 16:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:41.406 16:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:41.406 16:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.406 16:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.979 nvme0n1 00:27:41.979 16:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.979 16:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.979 16:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.979 16:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.979 16:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.979 16:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.979 16:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:27:41.979 16:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.979 16:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.979 16:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.979 16:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.979 16:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.979 16:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:41.979 16:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.979 16:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:41.979 16:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:41.979 16:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:41.979 16:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2VmOTllOTM5OTRiZmMyMDRkOTMxMWM4ZmU5ZmVmNDQxY2I0OGQ3N2E3Mzg0Mzc2MjMwYjdiYzVjNDg0MzUwM7wyRVI=: 00:27:41.979 16:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:41.979 16:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:41.979 16:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:41.979 16:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2VmOTllOTM5OTRiZmMyMDRkOTMxMWM4ZmU5ZmVmNDQxY2I0OGQ3N2E3Mzg0Mzc2MjMwYjdiYzVjNDg0MzUwM7wyRVI=: 00:27:41.979 16:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:41.979 16:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:41.979 16:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.979 16:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:41.979 16:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:41.979 16:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:41.979 16:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.979 16:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:41.979 16:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.979 16:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.979 16:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.979 16:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.979 16:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:41.979 16:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:41.979 16:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:41.979 16:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.979 16:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.979 16:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:41.979 16:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.979 16:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
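The repeating trace blocks above all exercise the same host-side sequence: restrict the allowed DH-HMAC-CHAP digest and DH group, attach a controller with the key pair under test, confirm the controller actually came up, and detach it again. Below is a minimal sketch of one such iteration, reconstructed from the xtrace output rather than copied from host/auth.sh (assumptions: rpc_cmd is the test suite's wrapper around SPDK's scripts/rpc.py, and key0/ckey0 name the DHHC-1 secrets loaded earlier in the run).

  # One connect_authenticate round (sha384 / ffdhe6144 / keyid 0), as seen in the trace above.
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # Authentication succeeded if the controller shows up under its expected name; then tear it down.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0

On the target side, nvmet_auth_set_key echoes 'hmac(sha384)', the DH group name, and the DHHC-1 secrets; the trace does not show where those echoes are redirected, but on a kernel nvmet target they would normally be written into the host's configfs attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key). That destination is an assumption on my part, not something visible in this log.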
00:27:41.979 16:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:41.979 16:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:41.979 16:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:41.979 16:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.979 16:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.241 nvme0n1 00:27:42.241 16:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.241 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.241 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.241 16:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.241 16:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.241 16:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.502 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.502 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.502 16:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.502 16:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.502 16:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.502 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:42.502 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.502 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:42.502 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.502 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:42.502 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:42.502 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:42.502 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzY3MjhiMGYxMGFhZjZjY2RhOGY2YzQyMWZiMzY0MzKCiVxi: 00:27:42.502 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTM0ODBjYzA3OTRhYWZiYTU0YmZlNmQxYzRlYWM3MDY0NWQ3Nzg3NWZkMWFlOTUzOGY5MjA1NzczYTMxNzkzNpsdIiQ=: 00:27:42.502 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:42.502 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:42.502 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzY3MjhiMGYxMGFhZjZjY2RhOGY2YzQyMWZiMzY0MzKCiVxi: 00:27:42.502 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTM0ODBjYzA3OTRhYWZiYTU0YmZlNmQxYzRlYWM3MDY0NWQ3Nzg3NWZkMWFlOTUzOGY5MjA1NzczYTMxNzkzNpsdIiQ=: ]] 00:27:42.502 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTM0ODBjYzA3OTRhYWZiYTU0YmZlNmQxYzRlYWM3MDY0NWQ3Nzg3NWZkMWFlOTUzOGY5MjA1NzczYTMxNzkzNpsdIiQ=: 00:27:42.502 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:42.502 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
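The same attach, verify, detach cycle then repeats for every combination the script walks through; the `for digest in "${digests[@]}"`, `for dhgroup in "${dhgroups[@]}"` and `for keyid in "${!keys[@]}"` markers in the trace imply an outer loop roughly like the sketch below. The digest and DH-group lists are only the values visible in this part of the log, and keys/ckeys are the DHHC-1 secret arrays populated earlier in the script, so treat this as a hedged reconstruction, not the authoritative host/auth.sh source.

  # Reconstruction of the loop driving these traces (observed values only).
  digests=(sha384 sha512)
  dhgroups=(ffdhe2048 ffdhe6144 ffdhe8192)
  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # target-side key setup
              connect_authenticate "$digest" "$dhgroup" "$keyid"   # host-side attach/verify/detach
          done
      done
  done

Note that keyid 4 carries no controller key: the `${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}` expansion in the trace yields nothing when ckeys[4] is empty, so that attach is issued with --dhchap-key key4 only, i.e. without bidirectional (controller-to-host) authentication.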
00:27:42.502 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:42.502 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:42.502 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:42.502 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.502 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:42.502 16:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.502 16:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.502 16:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.502 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.502 16:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:42.502 16:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:42.502 16:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:42.502 16:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.502 16:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.502 16:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:42.502 16:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.502 16:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:42.502 16:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:42.502 16:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:42.502 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:42.502 16:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.502 16:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.073 nvme0n1 00:27:43.073 16:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.073 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.073 16:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.073 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.073 16:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.334 16:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.334 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.334 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.334 16:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.334 16:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.334 16:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.334 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.334 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:27:43.334 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.334 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:43.334 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:43.334 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:43.334 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDNhMmE1MTQ4YmQyZTVmOTE2ZDU3OWUxOWVmOTRlOTc3NTkzODkwMzIyZTRkNzYzHwyBBw==: 00:27:43.334 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: 00:27:43.334 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:43.334 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:43.334 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDNhMmE1MTQ4YmQyZTVmOTE2ZDU3OWUxOWVmOTRlOTc3NTkzODkwMzIyZTRkNzYzHwyBBw==: 00:27:43.334 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: ]] 00:27:43.334 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: 00:27:43.334 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:43.334 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.334 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:43.334 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:43.334 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:43.334 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.334 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:43.334 16:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.334 16:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.334 16:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.334 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.334 16:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:43.334 16:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:43.334 16:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:43.334 16:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.334 16:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.334 16:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:43.334 16:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.334 16:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:43.334 16:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:43.334 16:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:43.334 16:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:43.334 16:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.334 16:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.907 nvme0n1 00:27:43.907 16:19:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.907 16:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.907 16:19:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.907 16:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.907 16:19:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.167 16:19:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.167 16:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.167 16:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.167 16:19:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.167 16:19:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.167 16:19:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.167 16:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.167 16:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:44.167 16:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.167 16:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:44.167 16:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:44.167 16:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:44.167 16:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmNiYWQyZTg3NWFkNjBkNjA2NDAwMjIzZWZkNWE0Yjc0ENZS: 00:27:44.167 16:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWQxNDRjN2E0MWZkZmVjOGEzYTYzOTgyZGVjYjYyOTJMGr0e: 00:27:44.167 16:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:44.167 16:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:44.167 16:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmNiYWQyZTg3NWFkNjBkNjA2NDAwMjIzZWZkNWE0Yjc0ENZS: 00:27:44.168 16:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWQxNDRjN2E0MWZkZmVjOGEzYTYzOTgyZGVjYjYyOTJMGr0e: ]] 00:27:44.168 16:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWQxNDRjN2E0MWZkZmVjOGEzYTYzOTgyZGVjYjYyOTJMGr0e: 00:27:44.168 16:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:44.168 16:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.168 16:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:44.168 16:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:44.168 16:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:44.168 16:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.168 16:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:27:44.168 16:19:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.168 16:19:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.168 16:19:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.168 16:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.168 16:19:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:44.168 16:19:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:44.168 16:19:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:44.168 16:19:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.168 16:19:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.168 16:19:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:44.168 16:19:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.168 16:19:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:44.168 16:19:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:44.168 16:19:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:44.168 16:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:44.168 16:19:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.168 16:19:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.738 nvme0n1 00:27:44.738 16:19:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.998 16:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.998 16:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.998 16:19:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.998 16:19:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.998 16:19:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.998 16:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.998 16:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.998 16:19:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.998 16:19:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.998 16:19:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.998 16:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.998 16:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:44.998 16:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.998 16:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:44.998 16:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:44.998 16:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:44.998 16:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZjgwYjFmMDNhNzU3ZjViNDZkY2RjMzIxMDkzMGQ4M2IxNzA2ZTAwYjkzNmU0MTI1Tx9Uwg==: 00:27:44.998 16:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzgzNDZiMDU4NmRlNThlZDlkMGIyMzQzNDAyNmU4NTbZmCqK: 00:27:44.998 16:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:44.998 16:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:44.998 16:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjgwYjFmMDNhNzU3ZjViNDZkY2RjMzIxMDkzMGQ4M2IxNzA2ZTAwYjkzNmU0MTI1Tx9Uwg==: 00:27:44.998 16:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzgzNDZiMDU4NmRlNThlZDlkMGIyMzQzNDAyNmU4NTbZmCqK: ]] 00:27:44.998 16:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzgzNDZiMDU4NmRlNThlZDlkMGIyMzQzNDAyNmU4NTbZmCqK: 00:27:44.999 16:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:44.999 16:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.999 16:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:44.999 16:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:44.999 16:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:44.999 16:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.999 16:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:44.999 16:19:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.999 16:19:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.999 16:19:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.999 16:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.999 16:19:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:44.999 16:19:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:44.999 16:19:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:44.999 16:19:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.999 16:19:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.999 16:19:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:44.999 16:19:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.999 16:19:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:44.999 16:19:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:44.999 16:19:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:44.999 16:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:44.999 16:19:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.999 16:19:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.570 nvme0n1 00:27:45.570 16:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.570 16:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:27:45.570 16:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.570 16:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.570 16:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.570 16:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.830 16:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.830 16:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.830 16:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.830 16:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.830 16:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.830 16:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.830 16:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:45.830 16:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.830 16:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:45.830 16:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:45.830 16:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:45.830 16:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2VmOTllOTM5OTRiZmMyMDRkOTMxMWM4ZmU5ZmVmNDQxY2I0OGQ3N2E3Mzg0Mzc2MjMwYjdiYzVjNDg0MzUwM7wyRVI=: 00:27:45.830 16:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:45.830 16:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:45.830 16:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:45.830 16:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2VmOTllOTM5OTRiZmMyMDRkOTMxMWM4ZmU5ZmVmNDQxY2I0OGQ3N2E3Mzg0Mzc2MjMwYjdiYzVjNDg0MzUwM7wyRVI=: 00:27:45.830 16:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:45.830 16:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:45.830 16:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.830 16:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:45.830 16:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:45.830 16:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:45.830 16:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.830 16:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:45.830 16:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.830 16:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.830 16:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.830 16:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.830 16:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:45.830 16:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:45.830 16:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:45.830 16:19:21 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.830 16:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.830 16:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:45.830 16:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.830 16:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:45.830 16:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:45.830 16:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:45.830 16:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:45.830 16:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.830 16:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.402 nvme0n1 00:27:46.402 16:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.402 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.402 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.402 16:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.402 16:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.402 16:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.663 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.663 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.663 16:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.663 16:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.663 16:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.663 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:46.663 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:46.663 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.663 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:46.663 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.663 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:46.663 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:46.663 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:46.663 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzY3MjhiMGYxMGFhZjZjY2RhOGY2YzQyMWZiMzY0MzKCiVxi: 00:27:46.663 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTM0ODBjYzA3OTRhYWZiYTU0YmZlNmQxYzRlYWM3MDY0NWQ3Nzg3NWZkMWFlOTUzOGY5MjA1NzczYTMxNzkzNpsdIiQ=: 00:27:46.664 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:46.664 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:46.664 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NzY3MjhiMGYxMGFhZjZjY2RhOGY2YzQyMWZiMzY0MzKCiVxi: 00:27:46.664 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTM0ODBjYzA3OTRhYWZiYTU0YmZlNmQxYzRlYWM3MDY0NWQ3Nzg3NWZkMWFlOTUzOGY5MjA1NzczYTMxNzkzNpsdIiQ=: ]] 00:27:46.664 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTM0ODBjYzA3OTRhYWZiYTU0YmZlNmQxYzRlYWM3MDY0NWQ3Nzg3NWZkMWFlOTUzOGY5MjA1NzczYTMxNzkzNpsdIiQ=: 00:27:46.664 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:46.664 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.664 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:46.664 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:46.664 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:46.664 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.664 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:46.664 16:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.664 16:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.664 16:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.664 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.664 16:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:46.664 16:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:46.664 16:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:46.664 16:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.664 16:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.664 16:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:46.664 16:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.664 16:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:46.664 16:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:46.664 16:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:46.664 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:46.664 16:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.664 16:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.664 nvme0n1 00:27:46.664 16:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.664 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.664 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.664 16:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.664 16:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.664 16:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.664 16:19:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.664 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.664 16:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.664 16:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.925 16:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.925 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.925 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:46.925 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.925 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:46.925 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:46.925 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:46.925 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDNhMmE1MTQ4YmQyZTVmOTE2ZDU3OWUxOWVmOTRlOTc3NTkzODkwMzIyZTRkNzYzHwyBBw==: 00:27:46.925 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: 00:27:46.925 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:46.925 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:46.925 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDNhMmE1MTQ4YmQyZTVmOTE2ZDU3OWUxOWVmOTRlOTc3NTkzODkwMzIyZTRkNzYzHwyBBw==: 00:27:46.925 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: ]] 00:27:46.925 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: 00:27:46.925 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:46.925 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.925 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:46.925 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:46.925 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:46.925 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.925 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:46.925 16:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.925 16:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.925 16:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.925 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.925 16:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:46.925 16:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:46.925 16:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:46.925 16:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.925 16:19:22 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.925 16:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:46.925 16:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.925 16:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:46.925 16:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:46.925 16:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:46.925 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:46.925 16:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.925 16:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.925 nvme0n1 00:27:46.925 16:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.925 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.925 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.925 16:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.925 16:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.925 16:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.925 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.926 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.926 16:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.926 16:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.926 16:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.926 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.926 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:46.926 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.926 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:46.926 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:46.926 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:46.926 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmNiYWQyZTg3NWFkNjBkNjA2NDAwMjIzZWZkNWE0Yjc0ENZS: 00:27:46.926 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWQxNDRjN2E0MWZkZmVjOGEzYTYzOTgyZGVjYjYyOTJMGr0e: 00:27:46.926 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:46.926 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:46.926 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmNiYWQyZTg3NWFkNjBkNjA2NDAwMjIzZWZkNWE0Yjc0ENZS: 00:27:46.926 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWQxNDRjN2E0MWZkZmVjOGEzYTYzOTgyZGVjYjYyOTJMGr0e: ]] 00:27:46.926 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWQxNDRjN2E0MWZkZmVjOGEzYTYzOTgyZGVjYjYyOTJMGr0e: 00:27:46.926 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:27:46.926 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.926 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:46.926 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:46.926 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:46.926 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.926 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:46.926 16:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.926 16:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.187 16:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.187 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.187 16:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:47.187 16:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:47.187 16:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:47.187 16:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.187 16:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.187 16:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:47.187 16:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.187 16:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:47.187 16:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:47.187 16:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:47.187 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:47.187 16:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.187 16:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.187 nvme0n1 00:27:47.187 16:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.187 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.187 16:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.187 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.187 16:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.187 16:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.187 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.187 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.187 16:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.187 16:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.187 16:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.187 16:19:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.187 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:47.187 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.187 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:47.187 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:47.187 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:47.187 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjgwYjFmMDNhNzU3ZjViNDZkY2RjMzIxMDkzMGQ4M2IxNzA2ZTAwYjkzNmU0MTI1Tx9Uwg==: 00:27:47.187 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzgzNDZiMDU4NmRlNThlZDlkMGIyMzQzNDAyNmU4NTbZmCqK: 00:27:47.187 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:47.187 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:47.187 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjgwYjFmMDNhNzU3ZjViNDZkY2RjMzIxMDkzMGQ4M2IxNzA2ZTAwYjkzNmU0MTI1Tx9Uwg==: 00:27:47.187 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzgzNDZiMDU4NmRlNThlZDlkMGIyMzQzNDAyNmU4NTbZmCqK: ]] 00:27:47.187 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzgzNDZiMDU4NmRlNThlZDlkMGIyMzQzNDAyNmU4NTbZmCqK: 00:27:47.187 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:47.187 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.187 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:47.187 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:47.187 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:47.187 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.187 16:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:47.187 16:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.187 16:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.187 16:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.187 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.187 16:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:47.187 16:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:47.187 16:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:47.187 16:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.187 16:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.187 16:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:47.187 16:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.187 16:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:47.187 16:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:47.187 16:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:47.187 16:19:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:47.187 16:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.187 16:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.449 nvme0n1 00:27:47.449 16:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.449 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.449 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.449 16:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.449 16:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.449 16:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.449 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.449 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.449 16:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.449 16:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.449 16:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.449 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.449 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:47.449 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.449 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:47.449 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:47.449 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:47.449 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2VmOTllOTM5OTRiZmMyMDRkOTMxMWM4ZmU5ZmVmNDQxY2I0OGQ3N2E3Mzg0Mzc2MjMwYjdiYzVjNDg0MzUwM7wyRVI=: 00:27:47.449 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:47.449 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:47.449 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:47.449 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2VmOTllOTM5OTRiZmMyMDRkOTMxMWM4ZmU5ZmVmNDQxY2I0OGQ3N2E3Mzg0Mzc2MjMwYjdiYzVjNDg0MzUwM7wyRVI=: 00:27:47.449 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:47.449 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:47.449 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.449 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:47.449 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:47.449 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:47.449 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.449 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:47.449 16:19:23 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.449 16:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.449 16:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.449 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.449 16:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:47.449 16:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:47.449 16:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:47.449 16:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.449 16:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.449 16:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:47.449 16:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.449 16:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:47.449 16:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:47.449 16:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:47.449 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:47.449 16:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.449 16:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.709 nvme0n1 00:27:47.709 16:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.709 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.709 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.709 16:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.709 16:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.709 16:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.709 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.709 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.709 16:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.709 16:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.709 16:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.709 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:47.709 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.709 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:47.709 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.709 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:47.709 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:47.709 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:47.709 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NzY3MjhiMGYxMGFhZjZjY2RhOGY2YzQyMWZiMzY0MzKCiVxi: 00:27:47.709 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTM0ODBjYzA3OTRhYWZiYTU0YmZlNmQxYzRlYWM3MDY0NWQ3Nzg3NWZkMWFlOTUzOGY5MjA1NzczYTMxNzkzNpsdIiQ=: 00:27:47.709 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:47.709 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:47.709 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzY3MjhiMGYxMGFhZjZjY2RhOGY2YzQyMWZiMzY0MzKCiVxi: 00:27:47.709 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTM0ODBjYzA3OTRhYWZiYTU0YmZlNmQxYzRlYWM3MDY0NWQ3Nzg3NWZkMWFlOTUzOGY5MjA1NzczYTMxNzkzNpsdIiQ=: ]] 00:27:47.709 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTM0ODBjYzA3OTRhYWZiYTU0YmZlNmQxYzRlYWM3MDY0NWQ3Nzg3NWZkMWFlOTUzOGY5MjA1NzczYTMxNzkzNpsdIiQ=: 00:27:47.709 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:47.709 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.709 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:47.709 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:47.709 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:47.709 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.709 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:47.709 16:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.709 16:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.709 16:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.709 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.709 16:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:47.709 16:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:47.709 16:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:47.709 16:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.709 16:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.709 16:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:47.709 16:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.709 16:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:47.709 16:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:47.709 16:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:47.709 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:47.709 16:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.709 16:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.970 nvme0n1 00:27:47.970 16:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.970 
16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.970 16:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.970 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.970 16:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.970 16:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.970 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.970 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.970 16:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.970 16:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.970 16:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.970 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.970 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:47.970 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.970 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:47.970 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:47.970 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:47.970 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDNhMmE1MTQ4YmQyZTVmOTE2ZDU3OWUxOWVmOTRlOTc3NTkzODkwMzIyZTRkNzYzHwyBBw==: 00:27:47.970 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: 00:27:47.970 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:47.970 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:47.970 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDNhMmE1MTQ4YmQyZTVmOTE2ZDU3OWUxOWVmOTRlOTc3NTkzODkwMzIyZTRkNzYzHwyBBw==: 00:27:47.970 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: ]] 00:27:47.970 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: 00:27:47.970 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:47.970 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.970 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:47.970 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:47.970 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:47.970 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.970 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:47.970 16:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.970 16:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.970 16:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.970 16:19:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.970 16:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:47.970 16:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:47.970 16:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:47.970 16:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.970 16:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.970 16:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:47.970 16:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.970 16:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:47.970 16:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:47.970 16:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:47.970 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:47.970 16:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.970 16:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.232 nvme0n1 00:27:48.232 16:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.232 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.232 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.232 16:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.232 16:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.232 16:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.232 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.232 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.232 16:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.232 16:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.232 16:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.232 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.232 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:48.232 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.232 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:48.232 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:48.232 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:48.232 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmNiYWQyZTg3NWFkNjBkNjA2NDAwMjIzZWZkNWE0Yjc0ENZS: 00:27:48.232 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWQxNDRjN2E0MWZkZmVjOGEzYTYzOTgyZGVjYjYyOTJMGr0e: 00:27:48.232 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:48.232 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
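Editor's note: the host-side connect_authenticate sequence repeated throughout this trace reduces to four SPDK RPCs. The sketch below replays one iteration with the standalone scripts/rpc.py client instead of the autotest rpc_cmd wrapper; it assumes the DH-HMAC-CHAP keys named key1/ckey1 were registered earlier in the run (not shown in this excerpt) and that jq is available. All command names and flags are taken verbatim from the trace above.

  # Restrict the initiator to the digest/dhgroup pair under test.
  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
  # Attach with DH-HMAC-CHAP host and controller keys; a successful attach means authentication passed.
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # Verify the controller came up, then detach so the next digest/dhgroup/keyid combination starts clean.
  ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0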
00:27:48.232 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmNiYWQyZTg3NWFkNjBkNjA2NDAwMjIzZWZkNWE0Yjc0ENZS: 00:27:48.232 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWQxNDRjN2E0MWZkZmVjOGEzYTYzOTgyZGVjYjYyOTJMGr0e: ]] 00:27:48.232 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWQxNDRjN2E0MWZkZmVjOGEzYTYzOTgyZGVjYjYyOTJMGr0e: 00:27:48.232 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:48.232 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.232 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:48.232 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:48.232 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:48.232 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.232 16:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:48.232 16:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.232 16:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.232 16:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.232 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.232 16:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:48.232 16:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:48.232 16:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:48.232 16:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.232 16:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.232 16:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:48.232 16:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.232 16:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:48.232 16:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:48.232 16:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:48.232 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:48.232 16:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.232 16:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.492 nvme0n1 00:27:48.492 16:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.492 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.492 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.492 16:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.492 16:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.492 16:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.492 16:19:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.492 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.492 16:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.492 16:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.492 16:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.492 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.492 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:48.492 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.492 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:48.492 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:48.493 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:48.493 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjgwYjFmMDNhNzU3ZjViNDZkY2RjMzIxMDkzMGQ4M2IxNzA2ZTAwYjkzNmU0MTI1Tx9Uwg==: 00:27:48.493 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzgzNDZiMDU4NmRlNThlZDlkMGIyMzQzNDAyNmU4NTbZmCqK: 00:27:48.493 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:48.493 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:48.493 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjgwYjFmMDNhNzU3ZjViNDZkY2RjMzIxMDkzMGQ4M2IxNzA2ZTAwYjkzNmU0MTI1Tx9Uwg==: 00:27:48.493 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzgzNDZiMDU4NmRlNThlZDlkMGIyMzQzNDAyNmU4NTbZmCqK: ]] 00:27:48.493 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzgzNDZiMDU4NmRlNThlZDlkMGIyMzQzNDAyNmU4NTbZmCqK: 00:27:48.493 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:48.493 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.493 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:48.493 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:48.493 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:48.493 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.493 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:48.493 16:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.493 16:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.493 16:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.493 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.493 16:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:48.493 16:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:48.493 16:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:48.493 16:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.493 16:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:27:48.493 16:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:48.493 16:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.493 16:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:48.493 16:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:48.493 16:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:48.493 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:48.493 16:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.493 16:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.753 nvme0n1 00:27:48.753 16:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.753 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.753 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.753 16:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.753 16:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.753 16:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.753 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.753 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.753 16:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.753 16:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.753 16:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.753 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.753 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:48.753 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.753 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:48.753 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:48.753 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:48.753 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2VmOTllOTM5OTRiZmMyMDRkOTMxMWM4ZmU5ZmVmNDQxY2I0OGQ3N2E3Mzg0Mzc2MjMwYjdiYzVjNDg0MzUwM7wyRVI=: 00:27:48.753 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:48.753 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:48.753 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:48.753 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2VmOTllOTM5OTRiZmMyMDRkOTMxMWM4ZmU5ZmVmNDQxY2I0OGQ3N2E3Mzg0Mzc2MjMwYjdiYzVjNDg0MzUwM7wyRVI=: 00:27:48.753 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:48.753 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:48.753 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.753 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:48.753 
16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:48.753 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:48.753 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.753 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:48.753 16:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.753 16:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.753 16:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.753 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.753 16:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:48.753 16:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:48.753 16:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:48.753 16:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.753 16:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.753 16:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:48.753 16:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.753 16:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:48.753 16:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:48.753 16:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:48.753 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:48.753 16:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.753 16:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.014 nvme0n1 00:27:49.014 16:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.014 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.014 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.014 16:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.014 16:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.014 16:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.014 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.014 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.014 16:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.014 16:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.014 16:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.014 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:49.014 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.014 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:27:49.014 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.014 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:49.014 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:49.014 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:49.014 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzY3MjhiMGYxMGFhZjZjY2RhOGY2YzQyMWZiMzY0MzKCiVxi: 00:27:49.014 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTM0ODBjYzA3OTRhYWZiYTU0YmZlNmQxYzRlYWM3MDY0NWQ3Nzg3NWZkMWFlOTUzOGY5MjA1NzczYTMxNzkzNpsdIiQ=: 00:27:49.014 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:49.014 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:49.014 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzY3MjhiMGYxMGFhZjZjY2RhOGY2YzQyMWZiMzY0MzKCiVxi: 00:27:49.014 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTM0ODBjYzA3OTRhYWZiYTU0YmZlNmQxYzRlYWM3MDY0NWQ3Nzg3NWZkMWFlOTUzOGY5MjA1NzczYTMxNzkzNpsdIiQ=: ]] 00:27:49.014 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTM0ODBjYzA3OTRhYWZiYTU0YmZlNmQxYzRlYWM3MDY0NWQ3Nzg3NWZkMWFlOTUzOGY5MjA1NzczYTMxNzkzNpsdIiQ=: 00:27:49.014 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:49.014 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.014 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:49.014 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:49.014 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:49.014 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.014 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:49.014 16:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.014 16:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.014 16:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.014 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.014 16:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:49.014 16:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:49.014 16:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:49.014 16:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.014 16:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.014 16:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:49.014 16:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.014 16:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:49.014 16:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:49.014 16:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:49.014 16:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:49.014 16:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.014 16:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.586 nvme0n1 00:27:49.586 16:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.586 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.586 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.586 16:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.586 16:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.586 16:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.586 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.586 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.586 16:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.586 16:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.586 16:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.586 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.586 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:49.586 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.586 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:49.586 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:49.586 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:49.586 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDNhMmE1MTQ4YmQyZTVmOTE2ZDU3OWUxOWVmOTRlOTc3NTkzODkwMzIyZTRkNzYzHwyBBw==: 00:27:49.587 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: 00:27:49.587 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:49.587 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:49.587 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDNhMmE1MTQ4YmQyZTVmOTE2ZDU3OWUxOWVmOTRlOTc3NTkzODkwMzIyZTRkNzYzHwyBBw==: 00:27:49.587 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: ]] 00:27:49.587 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: 00:27:49.587 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:49.587 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.587 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:49.587 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:49.587 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:49.587 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.587 16:19:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:49.587 16:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.587 16:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.587 16:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.587 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.587 16:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:49.587 16:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:49.587 16:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:49.587 16:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.587 16:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.587 16:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:49.587 16:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.587 16:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:49.587 16:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:49.587 16:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:49.587 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:49.587 16:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.587 16:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.847 nvme0n1 00:27:49.847 16:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.847 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.847 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.847 16:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.847 16:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.847 16:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.847 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.847 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.847 16:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.847 16:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.847 16:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.847 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.847 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:49.847 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.847 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:49.847 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:49.847 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
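Editor's note: the get_main_ns_ip helper that the trace expands before every attach simply maps the transport in use to the right test IP and prints it. The following is a loose reconstruction from the xtrace above; the trace only shows resolved values, so the variable name $TEST_TRANSPORT and the exact control flow are assumptions.

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # used for RDMA runs
      ip_candidates["tcp"]=NVMF_INITIATOR_IP       # used for TCP runs such as this job
      # Bail out if the transport is unknown or unmapped; here tcp maps to NVMF_INITIATOR_IP.
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z ${!ip} ]] && return 1                  # indirect expansion; resolves to 10.0.0.1 in this run
      echo "${!ip}"
  }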
00:27:49.847 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmNiYWQyZTg3NWFkNjBkNjA2NDAwMjIzZWZkNWE0Yjc0ENZS: 00:27:49.847 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWQxNDRjN2E0MWZkZmVjOGEzYTYzOTgyZGVjYjYyOTJMGr0e: 00:27:49.848 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:49.848 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:49.848 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmNiYWQyZTg3NWFkNjBkNjA2NDAwMjIzZWZkNWE0Yjc0ENZS: 00:27:49.848 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWQxNDRjN2E0MWZkZmVjOGEzYTYzOTgyZGVjYjYyOTJMGr0e: ]] 00:27:49.848 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWQxNDRjN2E0MWZkZmVjOGEzYTYzOTgyZGVjYjYyOTJMGr0e: 00:27:49.848 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:49.848 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.848 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:49.848 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:49.848 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:49.848 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.848 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:49.848 16:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.848 16:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.848 16:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.848 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.848 16:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:49.848 16:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:49.848 16:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:49.848 16:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.848 16:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.848 16:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:49.848 16:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.848 16:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:49.848 16:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:49.848 16:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:49.848 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:49.848 16:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.848 16:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.109 nvme0n1 00:27:50.109 16:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.109 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:27:50.109 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.109 16:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.109 16:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.109 16:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.109 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.109 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.109 16:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.109 16:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.109 16:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.109 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.109 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:50.109 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.109 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:50.109 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:50.109 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:50.109 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjgwYjFmMDNhNzU3ZjViNDZkY2RjMzIxMDkzMGQ4M2IxNzA2ZTAwYjkzNmU0MTI1Tx9Uwg==: 00:27:50.109 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzgzNDZiMDU4NmRlNThlZDlkMGIyMzQzNDAyNmU4NTbZmCqK: 00:27:50.109 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:50.109 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:50.109 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjgwYjFmMDNhNzU3ZjViNDZkY2RjMzIxMDkzMGQ4M2IxNzA2ZTAwYjkzNmU0MTI1Tx9Uwg==: 00:27:50.109 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzgzNDZiMDU4NmRlNThlZDlkMGIyMzQzNDAyNmU4NTbZmCqK: ]] 00:27:50.109 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzgzNDZiMDU4NmRlNThlZDlkMGIyMzQzNDAyNmU4NTbZmCqK: 00:27:50.109 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:50.109 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.109 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:50.109 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:50.109 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:50.109 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.109 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:50.109 16:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.109 16:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.109 16:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.109 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.109 16:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:27:50.109 16:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:50.109 16:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:50.109 16:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.109 16:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.109 16:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:50.109 16:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.109 16:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:50.109 16:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:50.109 16:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:50.109 16:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:50.109 16:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.109 16:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.679 nvme0n1 00:27:50.679 16:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.679 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.679 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.679 16:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.679 16:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.679 16:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.679 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.679 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.679 16:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.679 16:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.679 16:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.679 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.679 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:50.679 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.679 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:50.679 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:50.679 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:50.679 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2VmOTllOTM5OTRiZmMyMDRkOTMxMWM4ZmU5ZmVmNDQxY2I0OGQ3N2E3Mzg0Mzc2MjMwYjdiYzVjNDg0MzUwM7wyRVI=: 00:27:50.679 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:50.679 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:50.679 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:50.679 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Y2VmOTllOTM5OTRiZmMyMDRkOTMxMWM4ZmU5ZmVmNDQxY2I0OGQ3N2E3Mzg0Mzc2MjMwYjdiYzVjNDg0MzUwM7wyRVI=: 00:27:50.679 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:50.679 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:50.679 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.679 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:50.679 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:50.679 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:50.679 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.679 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:50.679 16:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.679 16:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.679 16:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.679 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.679 16:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:50.679 16:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:50.679 16:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:50.679 16:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.679 16:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.679 16:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:50.679 16:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.679 16:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:50.679 16:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:50.679 16:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:50.679 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:50.679 16:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.680 16:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.940 nvme0n1 00:27:50.940 16:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.940 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.940 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.940 16:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.940 16:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.940 16:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.940 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.940 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.940 16:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:27:50.940 16:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.940 16:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.940 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:50.940 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.940 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:50.940 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.940 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:50.940 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:50.940 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:50.940 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzY3MjhiMGYxMGFhZjZjY2RhOGY2YzQyMWZiMzY0MzKCiVxi: 00:27:50.940 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTM0ODBjYzA3OTRhYWZiYTU0YmZlNmQxYzRlYWM3MDY0NWQ3Nzg3NWZkMWFlOTUzOGY5MjA1NzczYTMxNzkzNpsdIiQ=: 00:27:50.940 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:50.940 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:50.940 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzY3MjhiMGYxMGFhZjZjY2RhOGY2YzQyMWZiMzY0MzKCiVxi: 00:27:50.940 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTM0ODBjYzA3OTRhYWZiYTU0YmZlNmQxYzRlYWM3MDY0NWQ3Nzg3NWZkMWFlOTUzOGY5MjA1NzczYTMxNzkzNpsdIiQ=: ]] 00:27:50.940 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTM0ODBjYzA3OTRhYWZiYTU0YmZlNmQxYzRlYWM3MDY0NWQ3Nzg3NWZkMWFlOTUzOGY5MjA1NzczYTMxNzkzNpsdIiQ=: 00:27:50.940 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:50.940 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.940 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:50.940 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:50.940 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:50.940 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.940 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:50.940 16:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.940 16:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.940 16:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.940 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.940 16:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:50.940 16:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:50.940 16:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:50.940 16:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.940 16:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.940 16:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:27:50.940 16:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.940 16:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:50.940 16:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:50.940 16:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:50.940 16:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:50.940 16:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.940 16:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.512 nvme0n1 00:27:51.512 16:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.512 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.512 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.512 16:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.512 16:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.512 16:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.512 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.512 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.512 16:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.512 16:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.512 16:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.512 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.512 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:51.512 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.512 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:51.512 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:51.512 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:51.512 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDNhMmE1MTQ4YmQyZTVmOTE2ZDU3OWUxOWVmOTRlOTc3NTkzODkwMzIyZTRkNzYzHwyBBw==: 00:27:51.512 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: 00:27:51.512 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:51.512 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:51.512 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDNhMmE1MTQ4YmQyZTVmOTE2ZDU3OWUxOWVmOTRlOTc3NTkzODkwMzIyZTRkNzYzHwyBBw==: 00:27:51.512 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: ]] 00:27:51.512 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: 00:27:51.512 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
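On the target side, each nvmet_auth_set_key call in this loop only echoes the negotiated hash ('hmac(sha512)'), the DH group, and the DHHC-1 key strings; the redirection targets are not visible in the trace. A plausible sketch of what those echoes correspond to, assuming the kernel nvmet configfs host attributes (the configfs path and attribute names below are an assumption, not taken from this log):

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed configfs entry for the host NQN
echo 'hmac(sha512)' > "$host/dhchap_hash"      # digest selected by the outer digest loop
echo 'ffdhe6144'    > "$host/dhchap_dhgroup"   # DH group selected by the outer dhgroups loop
echo "$key"         > "$host/dhchap_key"       # DHHC-1:00:... host key for the current keyid
echo "$ckey"        > "$host/dhchap_ctrl_key"  # DHHC-1:02:... controller key, only when a ckey is defined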
00:27:51.512 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.512 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:51.512 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:51.512 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:51.512 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.512 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:51.512 16:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.512 16:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.512 16:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.512 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.512 16:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:51.512 16:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:51.512 16:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:51.512 16:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.512 16:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.512 16:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:51.512 16:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.512 16:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:51.512 16:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:51.512 16:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:51.512 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:51.512 16:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.512 16:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.084 nvme0n1 00:27:52.084 16:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.084 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.084 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.084 16:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.084 16:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.084 16:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.084 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.084 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.084 16:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.084 16:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.084 16:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.084 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:27:52.084 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:52.084 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.084 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:52.084 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:52.084 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:52.084 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmNiYWQyZTg3NWFkNjBkNjA2NDAwMjIzZWZkNWE0Yjc0ENZS: 00:27:52.084 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWQxNDRjN2E0MWZkZmVjOGEzYTYzOTgyZGVjYjYyOTJMGr0e: 00:27:52.084 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:52.084 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:52.084 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmNiYWQyZTg3NWFkNjBkNjA2NDAwMjIzZWZkNWE0Yjc0ENZS: 00:27:52.084 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWQxNDRjN2E0MWZkZmVjOGEzYTYzOTgyZGVjYjYyOTJMGr0e: ]] 00:27:52.084 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWQxNDRjN2E0MWZkZmVjOGEzYTYzOTgyZGVjYjYyOTJMGr0e: 00:27:52.084 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:52.084 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.084 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:52.084 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:52.084 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:52.084 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.084 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:52.084 16:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.084 16:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.084 16:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.084 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.084 16:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:52.084 16:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:52.084 16:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:52.084 16:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.084 16:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.084 16:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:52.084 16:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.084 16:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:52.084 16:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:52.084 16:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:52.084 16:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:52.084 16:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.084 16:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.657 nvme0n1 00:27:52.657 16:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.657 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.657 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.657 16:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.657 16:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.657 16:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.657 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.657 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.657 16:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.657 16:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.657 16:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.657 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.657 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:27:52.657 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.657 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:52.657 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:52.657 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:52.657 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjgwYjFmMDNhNzU3ZjViNDZkY2RjMzIxMDkzMGQ4M2IxNzA2ZTAwYjkzNmU0MTI1Tx9Uwg==: 00:27:52.657 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzgzNDZiMDU4NmRlNThlZDlkMGIyMzQzNDAyNmU4NTbZmCqK: 00:27:52.657 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:52.657 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:52.657 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjgwYjFmMDNhNzU3ZjViNDZkY2RjMzIxMDkzMGQ4M2IxNzA2ZTAwYjkzNmU0MTI1Tx9Uwg==: 00:27:52.657 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzgzNDZiMDU4NmRlNThlZDlkMGIyMzQzNDAyNmU4NTbZmCqK: ]] 00:27:52.657 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzgzNDZiMDU4NmRlNThlZDlkMGIyMzQzNDAyNmU4NTbZmCqK: 00:27:52.657 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:52.657 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.657 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:52.657 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:52.657 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:52.657 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.657 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:52.657 16:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.657 16:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.657 16:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.657 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.657 16:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:52.657 16:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:52.657 16:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:52.657 16:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.657 16:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.657 16:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:52.657 16:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.657 16:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:52.657 16:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:52.657 16:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:52.657 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:52.657 16:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.657 16:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.229 nvme0n1 00:27:53.229 16:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.229 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.229 16:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.229 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.229 16:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.229 16:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.229 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.229 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.229 16:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.229 16:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.229 16:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.229 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.229 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:53.229 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.229 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:53.229 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:53.229 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:53.229 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:Y2VmOTllOTM5OTRiZmMyMDRkOTMxMWM4ZmU5ZmVmNDQxY2I0OGQ3N2E3Mzg0Mzc2MjMwYjdiYzVjNDg0MzUwM7wyRVI=: 00:27:53.229 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:53.229 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:53.229 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:53.229 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2VmOTllOTM5OTRiZmMyMDRkOTMxMWM4ZmU5ZmVmNDQxY2I0OGQ3N2E3Mzg0Mzc2MjMwYjdiYzVjNDg0MzUwM7wyRVI=: 00:27:53.229 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:53.229 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:53.229 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.229 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:53.229 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:53.229 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:53.229 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.229 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:53.229 16:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.229 16:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.229 16:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.229 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.229 16:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.229 16:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.229 16:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.230 16:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.230 16:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.230 16:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.230 16:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.230 16:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.230 16:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.230 16:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.230 16:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:53.230 16:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.230 16:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.490 nvme0n1 00:27:53.490 16:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.490 16:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.490 16:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.490 16:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.490 16:19:29 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.490 16:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.751 16:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.751 16:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.751 16:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.751 16:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.751 16:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.751 16:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:53.751 16:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.751 16:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:53.751 16:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.751 16:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:53.751 16:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:53.751 16:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:53.751 16:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzY3MjhiMGYxMGFhZjZjY2RhOGY2YzQyMWZiMzY0MzKCiVxi: 00:27:53.751 16:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTM0ODBjYzA3OTRhYWZiYTU0YmZlNmQxYzRlYWM3MDY0NWQ3Nzg3NWZkMWFlOTUzOGY5MjA1NzczYTMxNzkzNpsdIiQ=: 00:27:53.751 16:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:53.751 16:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:53.751 16:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzY3MjhiMGYxMGFhZjZjY2RhOGY2YzQyMWZiMzY0MzKCiVxi: 00:27:53.751 16:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTM0ODBjYzA3OTRhYWZiYTU0YmZlNmQxYzRlYWM3MDY0NWQ3Nzg3NWZkMWFlOTUzOGY5MjA1NzczYTMxNzkzNpsdIiQ=: ]] 00:27:53.751 16:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTM0ODBjYzA3OTRhYWZiYTU0YmZlNmQxYzRlYWM3MDY0NWQ3Nzg3NWZkMWFlOTUzOGY5MjA1NzczYTMxNzkzNpsdIiQ=: 00:27:53.751 16:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:53.751 16:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.751 16:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:53.751 16:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:53.751 16:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:53.751 16:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.751 16:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:53.751 16:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.751 16:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.751 16:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.751 16:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.751 16:19:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.751 16:19:29 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.751 16:19:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.751 16:19:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.751 16:19:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.751 16:19:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.751 16:19:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.751 16:19:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.751 16:19:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.751 16:19:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.751 16:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:53.751 16:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.751 16:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.323 nvme0n1 00:27:54.323 16:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.323 16:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.323 16:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.323 16:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.323 16:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.323 16:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.584 16:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.584 16:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.584 16:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.584 16:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.584 16:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.584 16:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.584 16:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:54.584 16:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.584 16:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:54.584 16:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:54.584 16:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:54.584 16:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDNhMmE1MTQ4YmQyZTVmOTE2ZDU3OWUxOWVmOTRlOTc3NTkzODkwMzIyZTRkNzYzHwyBBw==: 00:27:54.584 16:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: 00:27:54.584 16:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:54.584 16:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:54.584 16:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDNhMmE1MTQ4YmQyZTVmOTE2ZDU3OWUxOWVmOTRlOTc3NTkzODkwMzIyZTRkNzYzHwyBBw==: 00:27:54.584 16:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: ]] 00:27:54.584 16:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: 00:27:54.584 16:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:54.584 16:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.584 16:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:54.584 16:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:54.584 16:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:54.584 16:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.584 16:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:54.584 16:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.584 16:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.584 16:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.584 16:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.584 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:54.584 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:54.584 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:54.584 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.584 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.584 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:54.584 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.584 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:54.584 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:54.584 16:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:54.584 16:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:54.584 16:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.584 16:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.154 nvme0n1 00:27:55.154 16:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.154 16:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.154 16:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.154 16:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.154 16:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.415 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.415 16:19:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.415 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.416 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.416 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.416 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.416 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.416 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:55.416 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.416 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:55.416 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:55.416 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:55.416 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmNiYWQyZTg3NWFkNjBkNjA2NDAwMjIzZWZkNWE0Yjc0ENZS: 00:27:55.416 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWQxNDRjN2E0MWZkZmVjOGEzYTYzOTgyZGVjYjYyOTJMGr0e: 00:27:55.416 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:55.416 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:55.416 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmNiYWQyZTg3NWFkNjBkNjA2NDAwMjIzZWZkNWE0Yjc0ENZS: 00:27:55.416 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWQxNDRjN2E0MWZkZmVjOGEzYTYzOTgyZGVjYjYyOTJMGr0e: ]] 00:27:55.416 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWQxNDRjN2E0MWZkZmVjOGEzYTYzOTgyZGVjYjYyOTJMGr0e: 00:27:55.416 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:55.416 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.416 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:55.416 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:55.416 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:55.416 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.416 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:55.416 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.416 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.416 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.416 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.416 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:55.416 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:55.416 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:55.416 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.416 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.416 16:19:31 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:55.416 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.416 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:55.416 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:55.416 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:55.416 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:55.416 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.416 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.988 nvme0n1 00:27:55.988 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.988 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.988 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.988 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.988 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.249 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.249 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.249 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.249 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.249 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.249 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.249 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.249 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:27:56.249 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.249 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:56.249 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:56.249 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:56.249 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjgwYjFmMDNhNzU3ZjViNDZkY2RjMzIxMDkzMGQ4M2IxNzA2ZTAwYjkzNmU0MTI1Tx9Uwg==: 00:27:56.249 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzgzNDZiMDU4NmRlNThlZDlkMGIyMzQzNDAyNmU4NTbZmCqK: 00:27:56.249 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:56.249 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:56.249 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjgwYjFmMDNhNzU3ZjViNDZkY2RjMzIxMDkzMGQ4M2IxNzA2ZTAwYjkzNmU0MTI1Tx9Uwg==: 00:27:56.249 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzgzNDZiMDU4NmRlNThlZDlkMGIyMzQzNDAyNmU4NTbZmCqK: ]] 00:27:56.249 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzgzNDZiMDU4NmRlNThlZDlkMGIyMzQzNDAyNmU4NTbZmCqK: 00:27:56.249 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:56.249 16:19:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.249 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:56.249 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:56.249 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:56.249 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.249 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:56.249 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.249 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.249 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.249 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.249 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:56.249 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:56.249 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:56.249 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.249 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.249 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:56.249 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.249 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:56.249 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:56.249 16:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:56.249 16:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:56.249 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.249 16:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.190 nvme0n1 00:27:57.190 16:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.190 16:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.190 16:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.190 16:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.190 16:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.190 16:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.190 16:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.190 16:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.190 16:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.190 16:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.190 16:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.190 16:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:57.190 16:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:57.190 16:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.190 16:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:57.190 16:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:57.190 16:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:57.190 16:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2VmOTllOTM5OTRiZmMyMDRkOTMxMWM4ZmU5ZmVmNDQxY2I0OGQ3N2E3Mzg0Mzc2MjMwYjdiYzVjNDg0MzUwM7wyRVI=: 00:27:57.190 16:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:57.190 16:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:57.190 16:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:57.190 16:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2VmOTllOTM5OTRiZmMyMDRkOTMxMWM4ZmU5ZmVmNDQxY2I0OGQ3N2E3Mzg0Mzc2MjMwYjdiYzVjNDg0MzUwM7wyRVI=: 00:27:57.190 16:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:57.190 16:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:57.190 16:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.190 16:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:57.190 16:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:57.190 16:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:57.190 16:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.190 16:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:57.190 16:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.190 16:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.190 16:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.190 16:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.190 16:19:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:57.190 16:19:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:57.190 16:19:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:57.190 16:19:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.190 16:19:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.190 16:19:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:57.190 16:19:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.190 16:19:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:57.190 16:19:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:57.190 16:19:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:57.190 16:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:57.190 16:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:27:57.190 16:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.799 nvme0n1 00:27:57.799 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.799 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.799 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.799 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.799 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.799 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.799 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.799 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.799 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.799 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.799 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.799 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:57.799 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.799 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:57.799 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:57.799 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:57.799 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDNhMmE1MTQ4YmQyZTVmOTE2ZDU3OWUxOWVmOTRlOTc3NTkzODkwMzIyZTRkNzYzHwyBBw==: 00:27:57.799 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: 00:27:57.799 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:57.799 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:57.799 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDNhMmE1MTQ4YmQyZTVmOTE2ZDU3OWUxOWVmOTRlOTc3NTkzODkwMzIyZTRkNzYzHwyBBw==: 00:27:57.799 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: ]] 00:27:57.799 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGVjYWMxMDExN2MzYjhiODk4NDcwMjBiYWY4YmNmNGE5ZjcxOTc4ZjcxZTNhN2Y3CvXjLg==: 00:27:57.799 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:57.799 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.799 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.799 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.799 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:57.799 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:57.799 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:57.799 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:57.799 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.799 
16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.799 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:57.799 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.799 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:57.799 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:57.799 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:57.799 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:57.799 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:27:57.799 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:57.799 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:57.799 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:57.799 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:57.799 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:57.799 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:57.799 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.799 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.066 request: 00:27:58.066 { 00:27:58.066 "name": "nvme0", 00:27:58.066 "trtype": "tcp", 00:27:58.066 "traddr": "10.0.0.1", 00:27:58.066 "adrfam": "ipv4", 00:27:58.066 "trsvcid": "4420", 00:27:58.066 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:58.066 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:58.066 "prchk_reftag": false, 00:27:58.066 "prchk_guard": false, 00:27:58.066 "hdgst": false, 00:27:58.066 "ddgst": false, 00:27:58.066 "method": "bdev_nvme_attach_controller", 00:27:58.066 "req_id": 1 00:27:58.066 } 00:27:58.066 Got JSON-RPC error response 00:27:58.066 response: 00:27:58.066 { 00:27:58.066 "code": -5, 00:27:58.066 "message": "Input/output error" 00:27:58.066 } 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.066 request: 00:27:58.066 { 00:27:58.066 "name": "nvme0", 00:27:58.066 "trtype": "tcp", 00:27:58.066 "traddr": "10.0.0.1", 00:27:58.066 "adrfam": "ipv4", 00:27:58.066 "trsvcid": "4420", 00:27:58.066 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:58.066 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:58.066 "prchk_reftag": false, 00:27:58.066 "prchk_guard": false, 00:27:58.066 "hdgst": false, 00:27:58.066 "ddgst": false, 00:27:58.066 "dhchap_key": "key2", 00:27:58.066 "method": "bdev_nvme_attach_controller", 00:27:58.066 "req_id": 1 00:27:58.066 } 00:27:58.066 Got JSON-RPC error response 00:27:58.066 response: 00:27:58.066 { 00:27:58.066 "code": -5, 00:27:58.066 "message": "Input/output error" 00:27:58.066 } 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:27:58.066 16:19:33 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.066 request: 00:27:58.066 { 00:27:58.066 "name": "nvme0", 00:27:58.066 "trtype": "tcp", 00:27:58.066 "traddr": "10.0.0.1", 00:27:58.066 "adrfam": "ipv4", 
00:27:58.066 "trsvcid": "4420", 00:27:58.066 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:58.066 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:58.066 "prchk_reftag": false, 00:27:58.066 "prchk_guard": false, 00:27:58.066 "hdgst": false, 00:27:58.066 "ddgst": false, 00:27:58.066 "dhchap_key": "key1", 00:27:58.066 "dhchap_ctrlr_key": "ckey2", 00:27:58.066 "method": "bdev_nvme_attach_controller", 00:27:58.066 "req_id": 1 00:27:58.066 } 00:27:58.066 Got JSON-RPC error response 00:27:58.066 response: 00:27:58.066 { 00:27:58.066 "code": -5, 00:27:58.066 "message": "Input/output error" 00:27:58.066 } 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:58.066 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:58.367 rmmod nvme_tcp 00:27:58.367 rmmod nvme_fabrics 00:27:58.367 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:58.367 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:27:58.367 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:27:58.367 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2441856 ']' 00:27:58.367 16:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2441856 00:27:58.367 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 2441856 ']' 00:27:58.367 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 2441856 00:27:58.367 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:27:58.367 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:58.367 16:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2441856 00:27:58.367 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:58.367 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:58.367 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2441856' 00:27:58.367 killing process with pid 2441856 00:27:58.367 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 2441856 00:27:58.367 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 2441856 00:27:58.367 16:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:27:58.367 16:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:58.367 16:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:58.367 16:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:58.367 16:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:58.367 16:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:58.367 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:58.367 16:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:00.909 16:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:00.909 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:00.909 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:00.909 16:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:00.909 16:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:00.909 16:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:28:00.909 16:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:00.909 16:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:00.909 16:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:00.909 16:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:00.909 16:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:28:00.909 16:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:28:00.909 16:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:04.230 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:04.230 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:04.230 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:04.230 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:04.230 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:04.230 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:04.230 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:04.230 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:04.230 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:04.230 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:04.230 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:04.230 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:04.230 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:04.230 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:04.230 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:04.230 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:04.230 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:04.491 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Yy0 /tmp/spdk.key-null.QE5 /tmp/spdk.key-sha256.OdL /tmp/spdk.key-sha384.z5e /tmp/spdk.key-sha512.VXw 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:04.491 16:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:07.795 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:07.795 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:07.795 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:07.795 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:07.795 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:07.795 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:07.795 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:07.795 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:07.795 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:07.795 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:28:07.795 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:07.795 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:07.795 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:07.795 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:07.795 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:07.795 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:07.795 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:08.055 00:28:08.055 real 0m58.673s 00:28:08.055 user 0m52.452s 00:28:08.055 sys 0m15.045s 00:28:08.055 16:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:08.055 16:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.055 ************************************ 00:28:08.055 END TEST nvmf_auth_host 00:28:08.055 ************************************ 00:28:08.055 16:19:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:08.055 16:19:43 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:28:08.055 16:19:43 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:08.055 16:19:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:08.056 16:19:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:08.056 16:19:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:08.056 ************************************ 00:28:08.056 START TEST nvmf_digest 00:28:08.056 ************************************ 00:28:08.056 16:19:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:08.316 * Looking for test storage... 
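A brief note on the nvmf_auth_host checks that concluded above: each NOT-wrapped rpc_cmd is a negative test. The host-side bdev_nvme_attach_controller call is made with a DH-HMAC-CHAP key combination the target will not accept, the JSON-RPC layer reports code -5 (Input/output error), and the script then confirms via bdev_nvme_get_controllers that no controller was left behind. The sketch below condenses that pattern; the rpc.py path, addresses, NQNs, and key names are copied from the log, and the plain if/exit handling stands in for the suite's NOT and xtrace helpers.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Attempt the attach with a mismatched DH-HMAC-CHAP key; this is expected to fail.
  if "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2; then
      echo "unexpected: attach with a mismatched DH-CHAP key succeeded" >&2
      exit 1
  fi

  # The failed attach must not leave a controller behind.
  count=$("$rpc" bdev_nvme_get_controllers | jq length)
  [[ $count -eq 0 ]] || { echo "unexpected controller present after failed attach" >&2; exit 1; }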
00:28:08.316 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:08.316 16:19:43 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:08.316 16:19:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:08.316 16:19:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:08.316 16:19:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:08.316 16:19:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:08.316 16:19:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:08.316 16:19:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:08.316 16:19:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:08.317 16:19:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:08.317 16:19:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:08.317 16:19:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:08.317 16:19:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:08.317 16:19:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:08.317 16:19:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:08.317 16:19:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:08.317 16:19:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:08.317 16:19:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:08.317 16:19:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:08.317 16:19:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:08.317 16:19:43 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:08.317 16:19:43 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:08.317 16:19:43 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:08.317 16:19:43 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.317 16:19:43 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.317 16:19:43 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.317 16:19:43 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:08.317 16:19:43 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.317 16:19:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:28:08.317 16:19:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:08.317 16:19:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:08.317 16:19:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:08.317 16:19:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:08.317 16:19:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:08.317 16:19:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:08.317 16:19:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:08.317 16:19:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:08.317 16:19:43 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:08.317 16:19:43 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:08.317 16:19:43 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:08.317 16:19:43 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:08.317 16:19:43 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:08.317 16:19:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:08.317 16:19:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:08.317 16:19:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:08.317 16:19:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:08.317 16:19:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:08.317 16:19:43 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:08.317 16:19:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:08.317 16:19:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:08.317 16:19:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:08.317 16:19:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:08.317 16:19:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:28:08.317 16:19:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:16.459 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:16.459 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:28:16.459 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:16.459 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:16.459 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:16.459 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:16.459 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:16.459 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:28:16.459 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:16.459 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:28:16.459 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:28:16.459 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:28:16.459 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:28:16.459 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:28:16.459 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:28:16.459 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:16.459 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:16.459 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:16.459 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:16.459 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:16.459 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:16.459 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:16.459 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:16.459 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:16.459 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:16.459 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:16.459 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:16.459 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:16.459 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:16.459 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:28:16.459 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:16.459 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:16.459 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:16.459 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:16.459 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:16.459 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:16.459 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:16.459 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:16.459 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:16.459 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:16.459 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:16.460 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:16.460 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:16.460 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:16.460 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:16.460 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:16.460 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:16.460 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:16.460 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:16.460 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:16.460 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:16.460 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:16.460 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:16.460 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:16.460 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:16.460 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:16.460 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:16.460 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:16.460 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:16.460 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:16.460 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:16.460 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:16.460 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:16.460 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:16.460 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:16.460 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:16.460 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:16.460 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:16.460 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:16.460 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:16.460 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:16.460 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:16.460 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:28:16.460 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:16.460 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:16.460 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:16.460 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:16.460 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:16.460 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:16.460 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:16.460 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:16.460 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:16.460 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:16.460 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:16.460 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:16.460 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:16.460 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:16.460 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:16.460 16:19:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:16.460 16:19:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:16.460 16:19:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:16.460 16:19:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:16.460 16:19:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:16.460 16:19:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:16.460 16:19:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:16.460 16:19:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:16.460 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:16.460 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.330 ms 00:28:16.460 00:28:16.460 --- 10.0.0.2 ping statistics --- 00:28:16.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:16.460 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:28:16.460 16:19:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:16.460 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:16.460 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.372 ms 00:28:16.460 00:28:16.460 --- 10.0.0.1 ping statistics --- 00:28:16.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:16.460 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:28:16.460 16:19:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:16.460 16:19:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:28:16.460 16:19:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:16.460 16:19:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:16.460 16:19:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:16.460 16:19:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:16.460 16:19:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:16.460 16:19:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:16.460 16:19:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:16.460 16:19:51 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:16.460 16:19:51 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:16.460 16:19:51 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:16.460 16:19:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:16.460 16:19:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:16.460 16:19:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:16.460 ************************************ 00:28:16.460 START TEST nvmf_digest_clean 00:28:16.460 ************************************ 00:28:16.460 16:19:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:28:16.460 16:19:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:28:16.460 16:19:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:16.460 16:19:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:16.460 16:19:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:16.460 16:19:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:16.460 16:19:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:16.460 16:19:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:16.460 16:19:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:16.460 16:19:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=2458503 00:28:16.460 16:19:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 2458503 00:28:16.460 16:19:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2458503 ']' 00:28:16.460 16:19:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:16.460 16:19:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:16.460 16:19:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:16.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:16.460 16:19:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:16.461 16:19:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:16.461 16:19:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:16.461 [2024-07-15 16:19:51.313127] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:28:16.461 [2024-07-15 16:19:51.313177] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:16.461 EAL: No free 2048 kB hugepages reported on node 1 00:28:16.461 [2024-07-15 16:19:51.378654] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.461 [2024-07-15 16:19:51.444760] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:16.461 [2024-07-15 16:19:51.444792] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:16.461 [2024-07-15 16:19:51.444800] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:16.461 [2024-07-15 16:19:51.444811] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:16.461 [2024-07-15 16:19:51.444817] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
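For reference, the TCP test-bed that nvmf_tcp_init assembled a little earlier (before the target was started) amounts to the commands below. This is a condensed restatement of what the xtrace already shows; the cvl_0_0/cvl_0_1 interface names, the 10.0.0.0/24 addresses, and the iptables rule are all taken from the log.

  # Move one port of the E810 pair into its own namespace and address both ends.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side

  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Allow NVMe/TCP traffic in, then verify reachability in both directions.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1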
00:28:16.461 [2024-07-15 16:19:51.444841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:16.461 16:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:16.461 16:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:16.461 16:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:16.461 16:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:16.461 16:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:16.461 16:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:16.461 16:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:16.461 16:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:16.461 16:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:16.461 16:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.461 16:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:16.461 null0 00:28:16.461 [2024-07-15 16:19:52.179877] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:16.461 [2024-07-15 16:19:52.204062] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:16.461 16:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.461 16:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:16.461 16:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:16.461 16:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:16.461 16:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:16.461 16:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:16.461 16:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:16.461 16:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:16.461 16:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2458849 00:28:16.461 16:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2458849 /var/tmp/bperf.sock 00:28:16.461 16:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2458849 ']' 00:28:16.461 16:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:16.461 16:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:16.461 16:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:16.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
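The target brought up just above is started inside the namespace with --wait-for-rpc and then configured through a single rpc_cmd call that the xtrace collapses (host/digest.sh@43). Only the null0 bdev name, the nqn.2016-06.io.spdk:cnode1 subsystem, the SPDKISFASTANDAWESOME serial, the '-t tcp -o' transport options, and the 10.0.0.2:4420 listener are visible in the log; the sketch below fills in the rest with standard SPDK RPCs and assumed sizes, so treat it as an approximation of common_target_config rather than its exact contents.

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc="$spdk/scripts/rpc.py"

  # Start the target in the test namespace and let it wait for RPC-driven init.
  ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.5; done   # stand-in for the suite's waitforlisten

  "$rpc" framework_start_init                    # finish subsystem init after --wait-for-rpc
  "$rpc" nvmf_create_transport -t tcp -o         # transport options as logged in NVMF_TRANSPORT_OPTS
  "$rpc" bdev_null_create null0 512 512          # size/block size assumed; only the name is from the log
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420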
00:28:16.461 16:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:16.461 16:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:16.461 16:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:16.461 [2024-07-15 16:19:52.256301] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:28:16.461 [2024-07-15 16:19:52.256348] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2458849 ] 00:28:16.461 EAL: No free 2048 kB hugepages reported on node 1 00:28:16.722 [2024-07-15 16:19:52.331696] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.722 [2024-07-15 16:19:52.395929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:17.293 16:19:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:17.293 16:19:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:17.293 16:19:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:17.293 16:19:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:17.293 16:19:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:17.552 16:19:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:17.552 16:19:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:17.812 nvme0n1 00:28:18.072 16:19:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:18.072 16:19:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:18.072 Running I/O for 2 seconds... 
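Each run_bperf pass follows the same shape: bdevperf is launched paused against its own RPC socket, its framework is started, a single NVMe-oF controller is attached with TCP data digest enabled, and traffic is kicked off through bdevperf.py. A condensed sketch of the first pass above (randread, 4 KiB, queue depth 128), using only commands that appear in the log; the wait loop stands in for the suite's waitforlisten helper.

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sock=/var/tmp/bperf.sock

  # Launch bdevperf paused (-z --wait-for-rpc) on core 1 with its own RPC socket.
  "$spdk/build/examples/bdevperf" -m 2 -r "$sock" -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  bperfpid=$!
  while [ ! -S "$sock" ]; do sleep 0.5; done

  "$spdk/scripts/rpc.py" -s "$sock" framework_start_init

  # Attach the target's subsystem with data digest (--ddgst) enabled on the TCP connection.
  "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Run the configured workload and wait for the 2-second result summary.
  "$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests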
00:28:19.987 00:28:19.987 Latency(us) 00:28:19.987 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:19.987 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:19.987 nvme0n1 : 2.01 19074.82 74.51 0.00 0.00 6703.33 3481.60 18022.40 00:28:19.987 =================================================================================================================== 00:28:19.987 Total : 19074.82 74.51 0.00 0.00 6703.33 3481.60 18022.40 00:28:19.987 0 00:28:19.987 16:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:19.988 16:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:19.988 16:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:19.988 16:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:19.988 | select(.opcode=="crc32c") 00:28:19.988 | "\(.module_name) \(.executed)"' 00:28:19.988 16:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:20.248 16:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:20.248 16:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:20.248 16:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:20.248 16:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:20.248 16:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2458849 00:28:20.248 16:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2458849 ']' 00:28:20.248 16:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2458849 00:28:20.248 16:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:20.248 16:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:20.248 16:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2458849 00:28:20.248 16:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:20.248 16:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:20.248 16:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2458849' 00:28:20.248 killing process with pid 2458849 00:28:20.248 16:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2458849 00:28:20.248 Received shutdown signal, test time was about 2.000000 seconds 00:28:20.248 00:28:20.248 Latency(us) 00:28:20.248 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:20.248 =================================================================================================================== 00:28:20.248 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:20.248 16:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2458849 00:28:20.509 16:19:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:20.509 16:19:56 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:20.509 16:19:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:20.509 16:19:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:20.509 16:19:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:20.509 16:19:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:20.509 16:19:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:20.509 16:19:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2459534 00:28:20.509 16:19:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2459534 /var/tmp/bperf.sock 00:28:20.509 16:19:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2459534 ']' 00:28:20.509 16:19:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:20.509 16:19:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:20.509 16:19:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:20.509 16:19:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:20.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:20.509 16:19:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:20.509 16:19:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:20.509 [2024-07-15 16:19:56.159890] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:28:20.509 [2024-07-15 16:19:56.159963] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2459534 ] 00:28:20.509 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:20.509 Zero copy mechanism will not be used. 
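After each bdevperf pass, the suite verifies that the digest work was really done in software (scan_dsa=false) by querying bdevperf's accel framework and filtering for the crc32c opcode, exactly as in the jq expression shown above. A standalone version of that check, reusing the RPC socket and filter from the log:

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sock=/var/tmp/bperf.sock

  # Pull crc32c accounting from bdevperf's accel layer as "<module> <executed-count>".
  read -r acc_module acc_executed < <(
      "$spdk/scripts/rpc.py" -s "$sock" accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  )

  # With DSA disabled, the digests must have been computed by the software module,
  # and at least one crc32c operation must have been executed during the run.
  (( acc_executed > 0 )) || exit 1
  [[ $acc_module == software ]] || exit 1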
00:28:20.509 EAL: No free 2048 kB hugepages reported on node 1 00:28:20.509 [2024-07-15 16:19:56.234450] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:20.509 [2024-07-15 16:19:56.287272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:21.082 16:19:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:21.082 16:19:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:21.082 16:19:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:21.082 16:19:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:21.082 16:19:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:21.341 16:19:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:21.341 16:19:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:21.600 nvme0n1 00:28:21.859 16:19:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:21.859 16:19:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:21.859 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:21.859 Zero copy mechanism will not be used. 00:28:21.859 Running I/O for 2 seconds... 
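Between passes, the bdevperf instance from the previous run is torn down with the killprocess helper whose xtrace appears above: a kill -0 liveness check, a comm-name check so a sudo wrapper is never signalled directly, then kill and wait. A minimal equivalent is sketched below, assuming the PID is the one captured when bdevperf was launched; the suite's handling of the sudo case is simplified here.

  # Tear down a previously launched bdevperf instance (PID captured at launch time).
  kill_bperf() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 0            # already gone
      local name
      name=$(ps --no-headers -o comm= "$pid")
      [[ $name == sudo ]] && return 1                   # simplified: the suite escalates instead
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
  }

  kill_bperf "$bperfpid"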
00:28:23.768 00:28:23.768 Latency(us) 00:28:23.768 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:23.768 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:23.768 nvme0n1 : 2.01 2344.42 293.05 0.00 0.00 6820.66 1126.40 14308.69 00:28:23.768 =================================================================================================================== 00:28:23.768 Total : 2344.42 293.05 0.00 0.00 6820.66 1126.40 14308.69 00:28:23.768 0 00:28:23.768 16:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:23.768 16:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:23.768 16:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:23.768 16:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:23.768 | select(.opcode=="crc32c") 00:28:23.768 | "\(.module_name) \(.executed)"' 00:28:23.768 16:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:24.028 16:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:24.028 16:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:24.028 16:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:24.028 16:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:24.028 16:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2459534 00:28:24.028 16:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2459534 ']' 00:28:24.028 16:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2459534 00:28:24.028 16:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:24.028 16:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:24.028 16:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2459534 00:28:24.028 16:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:24.028 16:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:24.028 16:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2459534' 00:28:24.028 killing process with pid 2459534 00:28:24.028 16:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2459534 00:28:24.028 Received shutdown signal, test time was about 2.000000 seconds 00:28:24.028 00:28:24.028 Latency(us) 00:28:24.028 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:24.028 =================================================================================================================== 00:28:24.028 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:24.028 16:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2459534 00:28:24.288 16:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:24.288 16:19:59 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:24.288 16:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:24.288 16:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:24.288 16:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:24.288 16:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:24.288 16:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:24.288 16:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2460215 00:28:24.288 16:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2460215 /var/tmp/bperf.sock 00:28:24.288 16:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2460215 ']' 00:28:24.288 16:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:24.288 16:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:24.288 16:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:24.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:24.288 16:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:24.289 16:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:24.289 16:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:24.289 [2024-07-15 16:19:59.924225] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
00:28:24.289 [2024-07-15 16:19:59.924279] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2460215 ] 00:28:24.289 EAL: No free 2048 kB hugepages reported on node 1 00:28:24.289 [2024-07-15 16:19:59.998957] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.289 [2024-07-15 16:20:00.055751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:24.890 16:20:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:24.890 16:20:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:24.890 16:20:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:24.890 16:20:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:24.890 16:20:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:25.150 16:20:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:25.150 16:20:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:25.410 nvme0n1 00:28:25.410 16:20:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:25.410 16:20:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:25.669 Running I/O for 2 seconds... 
00:28:27.578 00:28:27.578 Latency(us) 00:28:27.578 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:27.578 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:27.578 nvme0n1 : 2.00 21915.71 85.61 0.00 0.00 5832.88 2239.15 13598.72 00:28:27.578 =================================================================================================================== 00:28:27.578 Total : 21915.71 85.61 0.00 0.00 5832.88 2239.15 13598.72 00:28:27.578 0 00:28:27.578 16:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:27.578 16:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:27.578 16:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:27.578 16:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:27.578 | select(.opcode=="crc32c") 00:28:27.578 | "\(.module_name) \(.executed)"' 00:28:27.578 16:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:27.839 16:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:27.839 16:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:27.839 16:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:27.839 16:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:27.839 16:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2460215 00:28:27.839 16:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2460215 ']' 00:28:27.839 16:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2460215 00:28:27.839 16:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:27.839 16:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:27.839 16:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2460215 00:28:27.839 16:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:27.839 16:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:27.839 16:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2460215' 00:28:27.839 killing process with pid 2460215 00:28:27.839 16:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2460215 00:28:27.839 Received shutdown signal, test time was about 2.000000 seconds 00:28:27.839 00:28:27.839 Latency(us) 00:28:27.839 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:27.839 =================================================================================================================== 00:28:27.839 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:27.839 16:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2460215 00:28:27.839 16:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:27.839 16:20:03 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:27.839 16:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:27.839 16:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:27.839 16:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:27.839 16:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:27.839 16:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:27.839 16:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2460904 00:28:27.839 16:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2460904 /var/tmp/bperf.sock 00:28:27.839 16:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2460904 ']' 00:28:27.839 16:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:27.839 16:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:27.839 16:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:27.839 16:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:27.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:27.839 16:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:27.839 16:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:28.099 [2024-07-15 16:20:03.709109] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:28:28.099 [2024-07-15 16:20:03.709169] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2460904 ] 00:28:28.099 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:28.099 Zero copy mechanism will not be used. 
00:28:28.099 EAL: No free 2048 kB hugepages reported on node 1 00:28:28.099 [2024-07-15 16:20:03.782420] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:28.099 [2024-07-15 16:20:03.834479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:28.670 16:20:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:28.670 16:20:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:28.670 16:20:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:28.670 16:20:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:28.670 16:20:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:28.930 16:20:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:28.930 16:20:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:29.190 nvme0n1 00:28:29.190 16:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:29.190 16:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:29.450 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:29.450 Zero copy mechanism will not be used. 00:28:29.450 Running I/O for 2 seconds... 
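Each clean run ends with the same verification step, visible in the trace above for the first run and below for the second: the harness reads accel statistics from bdevperf and checks that the crc32c digest work was actually executed and, in this configuration, handled by the software module. A minimal sketch of that check, assuming jq is available (the harness wraps the RPC in its get_accel_stats helper):

  read -r acc_module acc_executed < <(
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  )
  # expect at least one executed crc32c operation, attributed to the software module
  (( acc_executed > 0 )) && [[ $acc_module == software ]]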
00:28:31.362 00:28:31.362 Latency(us) 00:28:31.362 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:31.362 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:31.362 nvme0n1 : 2.01 3054.71 381.84 0.00 0.00 5227.52 3263.15 21189.97 00:28:31.362 =================================================================================================================== 00:28:31.362 Total : 3054.71 381.84 0.00 0.00 5227.52 3263.15 21189.97 00:28:31.362 0 00:28:31.362 16:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:31.362 16:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:31.362 16:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:31.362 16:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:31.362 16:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:31.362 | select(.opcode=="crc32c") 00:28:31.362 | "\(.module_name) \(.executed)"' 00:28:31.622 16:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:31.622 16:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:31.622 16:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:31.622 16:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:31.622 16:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2460904 00:28:31.622 16:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2460904 ']' 00:28:31.622 16:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2460904 00:28:31.622 16:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:31.622 16:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:31.622 16:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2460904 00:28:31.622 16:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:31.622 16:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:31.622 16:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2460904' 00:28:31.622 killing process with pid 2460904 00:28:31.622 16:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2460904 00:28:31.622 Received shutdown signal, test time was about 2.000000 seconds 00:28:31.622 00:28:31.622 Latency(us) 00:28:31.622 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:31.622 =================================================================================================================== 00:28:31.622 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:31.622 16:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2460904 00:28:31.622 16:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2458503 00:28:31.622 16:20:07 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2458503 ']' 00:28:31.622 16:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2458503 00:28:31.622 16:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:31.622 16:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:31.622 16:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2458503 00:28:31.883 16:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:31.883 16:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:31.883 16:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2458503' 00:28:31.883 killing process with pid 2458503 00:28:31.883 16:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2458503 00:28:31.883 16:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2458503 00:28:31.883 00:28:31.883 real 0m16.363s 00:28:31.883 user 0m32.168s 00:28:31.883 sys 0m3.108s 00:28:31.883 16:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:31.883 16:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:31.883 ************************************ 00:28:31.883 END TEST nvmf_digest_clean 00:28:31.883 ************************************ 00:28:31.883 16:20:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:28:31.883 16:20:07 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:31.883 16:20:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:31.883 16:20:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:31.883 16:20:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:31.883 ************************************ 00:28:31.883 START TEST nvmf_digest_error 00:28:31.883 ************************************ 00:28:31.883 16:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:28:31.883 16:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:31.883 16:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:31.883 16:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:31.883 16:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:31.883 16:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=2461835 00:28:31.883 16:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 2461835 00:28:31.883 16:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:31.883 16:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2461835 ']' 00:28:31.883 16:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:28:31.883 16:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:31.883 16:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:31.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:31.883 16:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:31.883 16:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:32.144 [2024-07-15 16:20:07.758536] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:28:32.144 [2024-07-15 16:20:07.758594] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:32.144 EAL: No free 2048 kB hugepages reported on node 1 00:28:32.144 [2024-07-15 16:20:07.830107] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:32.144 [2024-07-15 16:20:07.903917] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:32.144 [2024-07-15 16:20:07.903959] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:32.144 [2024-07-15 16:20:07.903966] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:32.144 [2024-07-15 16:20:07.903973] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:32.144 [2024-07-15 16:20:07.903978] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
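The nvmf_digest_error test being set up here differs from the clean pass in one respect: crc32c is first routed to the accel error module on the target, and corruption is injected once bdevperf has attached, which is what produces the stream of "data digest error" completions later in the trace. The RPC calls that follow amount to roughly this sketch (sockets and arguments as logged; ordering condensed):

  SPDK_RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # target side (default socket /var/tmp/spdk.sock): route crc32c to the error module
  $SPDK_RPC accel_assign_opc -o crc32c -m error
  # bdevperf side: keep NVMe error statistics and retry failed I/O indefinitely
  $SPDK_RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $SPDK_RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # start corrupting crc32c results (type and interval taken verbatim from the trace)
  $SPDK_RPC accel_error_inject_error -o crc32c -t corrupt -i 256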
00:28:32.144 [2024-07-15 16:20:07.904005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:32.715 16:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:32.715 16:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:32.715 16:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:32.715 16:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:32.715 16:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:32.977 16:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:32.977 16:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:32.977 16:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.977 16:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:32.977 [2024-07-15 16:20:08.589992] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:32.977 16:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.977 16:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:32.977 16:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:32.977 16:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.977 16:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:32.977 null0 00:28:32.977 [2024-07-15 16:20:08.670495] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:32.977 [2024-07-15 16:20:08.694706] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:32.977 16:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.977 16:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:32.977 16:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:32.977 16:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:32.977 16:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:32.977 16:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:32.977 16:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2461962 00:28:32.977 16:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2461962 /var/tmp/bperf.sock 00:28:32.977 16:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2461962 ']' 00:28:32.977 16:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:32.977 16:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:32.977 16:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:28:32.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:32.977 16:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:32.977 16:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:32.977 16:20:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:28:32.977 [2024-07-15 16:20:08.746092] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:28:32.977 [2024-07-15 16:20:08.746145] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2461962 ] 00:28:32.977 EAL: No free 2048 kB hugepages reported on node 1 00:28:33.239 [2024-07-15 16:20:08.818558] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:33.239 [2024-07-15 16:20:08.872037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:33.808 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:33.808 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:33.808 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:33.808 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:34.067 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:34.067 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.067 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:34.067 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.067 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:34.067 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:34.328 nvme0n1 00:28:34.328 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:34.328 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.328 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:34.328 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.328 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:34.328 16:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:34.328 Running I/O for 2 seconds... 00:28:34.328 [2024-07-15 16:20:10.082393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:34.328 [2024-07-15 16:20:10.082423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.328 [2024-07-15 16:20:10.082432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.328 [2024-07-15 16:20:10.096888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:34.328 [2024-07-15 16:20:10.096910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.328 [2024-07-15 16:20:10.096918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.328 [2024-07-15 16:20:10.108999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:34.328 [2024-07-15 16:20:10.109017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:18648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.328 [2024-07-15 16:20:10.109023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.328 [2024-07-15 16:20:10.121978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:34.328 [2024-07-15 16:20:10.121996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.328 [2024-07-15 16:20:10.122002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.328 [2024-07-15 16:20:10.133639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:34.328 [2024-07-15 16:20:10.133657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.328 [2024-07-15 16:20:10.133669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.328 [2024-07-15 16:20:10.145878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:34.328 [2024-07-15 16:20:10.145897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:20431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.328 [2024-07-15 16:20:10.145904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.328 [2024-07-15 16:20:10.158208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:34.328 [2024-07-15 16:20:10.158225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:24392 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.328 [2024-07-15 16:20:10.158231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.589 [2024-07-15 16:20:10.171111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:34.589 [2024-07-15 16:20:10.171132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.589 [2024-07-15 16:20:10.171139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.589 [2024-07-15 16:20:10.183438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:34.589 [2024-07-15 16:20:10.183454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.589 [2024-07-15 16:20:10.183461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.589 [2024-07-15 16:20:10.195940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:34.589 [2024-07-15 16:20:10.195956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.589 [2024-07-15 16:20:10.195962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.590 [2024-07-15 16:20:10.207410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:34.590 [2024-07-15 16:20:10.207427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.590 [2024-07-15 16:20:10.207434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.590 [2024-07-15 16:20:10.219970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:34.590 [2024-07-15 16:20:10.219987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:11004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.590 [2024-07-15 16:20:10.219993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.590 [2024-07-15 16:20:10.231874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:34.590 [2024-07-15 16:20:10.231891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.590 [2024-07-15 16:20:10.231897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.590 [2024-07-15 16:20:10.244154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:34.590 [2024-07-15 16:20:10.244170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:1 nsid:1 lba:18170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.590 [2024-07-15 16:20:10.244177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.590 [2024-07-15 16:20:10.256739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:34.590 [2024-07-15 16:20:10.256756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.590 [2024-07-15 16:20:10.256762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.590 [2024-07-15 16:20:10.269328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:34.590 [2024-07-15 16:20:10.269344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:19861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.590 [2024-07-15 16:20:10.269351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.590 [2024-07-15 16:20:10.282374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:34.590 [2024-07-15 16:20:10.282390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.590 [2024-07-15 16:20:10.282396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.590 [2024-07-15 16:20:10.294832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:34.590 [2024-07-15 16:20:10.294848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.590 [2024-07-15 16:20:10.294854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.590 [2024-07-15 16:20:10.306266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:34.590 [2024-07-15 16:20:10.306282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.590 [2024-07-15 16:20:10.306288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.590 [2024-07-15 16:20:10.317778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:34.590 [2024-07-15 16:20:10.317795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.590 [2024-07-15 16:20:10.317802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.590 [2024-07-15 16:20:10.330244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:34.590 [2024-07-15 16:20:10.330261] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.590 [2024-07-15 16:20:10.330267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.590 [2024-07-15 16:20:10.342065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:34.590 [2024-07-15 16:20:10.342082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:23781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.590 [2024-07-15 16:20:10.342092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.590 [2024-07-15 16:20:10.354399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:34.590 [2024-07-15 16:20:10.354417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:22446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.590 [2024-07-15 16:20:10.354423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.590 [2024-07-15 16:20:10.366447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:34.590 [2024-07-15 16:20:10.366464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.590 [2024-07-15 16:20:10.366470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.590 [2024-07-15 16:20:10.380251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:34.590 [2024-07-15 16:20:10.380268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.590 [2024-07-15 16:20:10.380275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.590 [2024-07-15 16:20:10.392535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:34.590 [2024-07-15 16:20:10.392552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:15494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.590 [2024-07-15 16:20:10.392558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.590 [2024-07-15 16:20:10.403890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:34.590 [2024-07-15 16:20:10.403907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:8653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.590 [2024-07-15 16:20:10.403913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.590 [2024-07-15 16:20:10.417084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x14b68e0) 00:28:34.590 [2024-07-15 16:20:10.417100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.590 [2024-07-15 16:20:10.417107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.590 [2024-07-15 16:20:10.429133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:34.590 [2024-07-15 16:20:10.429150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.590 [2024-07-15 16:20:10.429157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.851 [2024-07-15 16:20:10.441071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:34.851 [2024-07-15 16:20:10.441087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:20959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.851 [2024-07-15 16:20:10.441094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.851 [2024-07-15 16:20:10.453473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:34.851 [2024-07-15 16:20:10.453492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:26 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.851 [2024-07-15 16:20:10.453498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.851 [2024-07-15 16:20:10.465799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:34.851 [2024-07-15 16:20:10.465816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.851 [2024-07-15 16:20:10.465822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.851 [2024-07-15 16:20:10.477762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:34.851 [2024-07-15 16:20:10.477779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.851 [2024-07-15 16:20:10.477785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.851 [2024-07-15 16:20:10.489821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:34.851 [2024-07-15 16:20:10.489837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:14803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.851 [2024-07-15 16:20:10.489843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.851 [2024-07-15 16:20:10.501890] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:34.851 [2024-07-15 16:20:10.501906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.851 [2024-07-15 16:20:10.501912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.851 [2024-07-15 16:20:10.513120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:34.851 [2024-07-15 16:20:10.513140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.851 [2024-07-15 16:20:10.513146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.851 [2024-07-15 16:20:10.525992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:34.851 [2024-07-15 16:20:10.526009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.851 [2024-07-15 16:20:10.526015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.851 [2024-07-15 16:20:10.540334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:34.851 [2024-07-15 16:20:10.540350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.851 [2024-07-15 16:20:10.540356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.851 [2024-07-15 16:20:10.552225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:34.851 [2024-07-15 16:20:10.552242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.852 [2024-07-15 16:20:10.552248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.852 [2024-07-15 16:20:10.565146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:34.852 [2024-07-15 16:20:10.565163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:5977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.852 [2024-07-15 16:20:10.565170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.852 [2024-07-15 16:20:10.576525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:34.852 [2024-07-15 16:20:10.576541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:11740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.852 [2024-07-15 16:20:10.576547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:28:34.852 [2024-07-15 16:20:10.588869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:34.852 [2024-07-15 16:20:10.588886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.852 [2024-07-15 16:20:10.588892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.852 [2024-07-15 16:20:10.600917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:34.852 [2024-07-15 16:20:10.600933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.852 [2024-07-15 16:20:10.600939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.852 [2024-07-15 16:20:10.613838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:34.852 [2024-07-15 16:20:10.613855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.852 [2024-07-15 16:20:10.613861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.852 [2024-07-15 16:20:10.625619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:34.852 [2024-07-15 16:20:10.625635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:10915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.852 [2024-07-15 16:20:10.625641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.852 [2024-07-15 16:20:10.636899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:34.852 [2024-07-15 16:20:10.636916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.852 [2024-07-15 16:20:10.636922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.852 [2024-07-15 16:20:10.649690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:34.852 [2024-07-15 16:20:10.649706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.852 [2024-07-15 16:20:10.649712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.852 [2024-07-15 16:20:10.662351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:34.852 [2024-07-15 16:20:10.662368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:15361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.852 [2024-07-15 16:20:10.662377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.852 [2024-07-15 16:20:10.674201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:34.852 [2024-07-15 16:20:10.674217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.852 [2024-07-15 16:20:10.674223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.852 [2024-07-15 16:20:10.686219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:34.852 [2024-07-15 16:20:10.686236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:23831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.852 [2024-07-15 16:20:10.686242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.113 [2024-07-15 16:20:10.698519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.113 [2024-07-15 16:20:10.698536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.113 [2024-07-15 16:20:10.698542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.113 [2024-07-15 16:20:10.710554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.113 [2024-07-15 16:20:10.710571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:22921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.113 [2024-07-15 16:20:10.710577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.113 [2024-07-15 16:20:10.723033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.113 [2024-07-15 16:20:10.723050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.113 [2024-07-15 16:20:10.723057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.113 [2024-07-15 16:20:10.736222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.113 [2024-07-15 16:20:10.736239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.113 [2024-07-15 16:20:10.736245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.113 [2024-07-15 16:20:10.749465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.113 [2024-07-15 16:20:10.749482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.113 [2024-07-15 16:20:10.749488] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.113 [2024-07-15 16:20:10.760441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.113 [2024-07-15 16:20:10.760458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.113 [2024-07-15 16:20:10.760465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.113 [2024-07-15 16:20:10.774423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.113 [2024-07-15 16:20:10.774444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.113 [2024-07-15 16:20:10.774450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.113 [2024-07-15 16:20:10.786758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.113 [2024-07-15 16:20:10.786775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.113 [2024-07-15 16:20:10.786781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.113 [2024-07-15 16:20:10.797577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.113 [2024-07-15 16:20:10.797593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:25571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.113 [2024-07-15 16:20:10.797599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.113 [2024-07-15 16:20:10.811241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.113 [2024-07-15 16:20:10.811257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:16112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.113 [2024-07-15 16:20:10.811263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.113 [2024-07-15 16:20:10.823780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.113 [2024-07-15 16:20:10.823797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.113 [2024-07-15 16:20:10.823803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.113 [2024-07-15 16:20:10.834529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.113 [2024-07-15 16:20:10.834546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:15165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:35.113 [2024-07-15 16:20:10.834552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.113 [2024-07-15 16:20:10.846815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.113 [2024-07-15 16:20:10.846832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.113 [2024-07-15 16:20:10.846838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.113 [2024-07-15 16:20:10.858954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.113 [2024-07-15 16:20:10.858971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.113 [2024-07-15 16:20:10.858977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.113 [2024-07-15 16:20:10.873604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.113 [2024-07-15 16:20:10.873620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.113 [2024-07-15 16:20:10.873626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.113 [2024-07-15 16:20:10.884148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.113 [2024-07-15 16:20:10.884165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:24633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.113 [2024-07-15 16:20:10.884171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.113 [2024-07-15 16:20:10.896820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.113 [2024-07-15 16:20:10.896836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.113 [2024-07-15 16:20:10.896842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.113 [2024-07-15 16:20:10.908359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.113 [2024-07-15 16:20:10.908376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.113 [2024-07-15 16:20:10.908382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.113 [2024-07-15 16:20:10.922205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.113 [2024-07-15 16:20:10.922221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 
lba:22949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.113 [2024-07-15 16:20:10.922227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.113 [2024-07-15 16:20:10.934178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.113 [2024-07-15 16:20:10.934194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:19693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.113 [2024-07-15 16:20:10.934200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.113 [2024-07-15 16:20:10.945981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.113 [2024-07-15 16:20:10.945998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.113 [2024-07-15 16:20:10.946004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.373 [2024-07-15 16:20:10.958232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.373 [2024-07-15 16:20:10.958250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.373 [2024-07-15 16:20:10.958256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.373 [2024-07-15 16:20:10.969375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.373 [2024-07-15 16:20:10.969392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:9629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.373 [2024-07-15 16:20:10.969399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.373 [2024-07-15 16:20:10.983029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.373 [2024-07-15 16:20:10.983049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:5019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.373 [2024-07-15 16:20:10.983055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.373 [2024-07-15 16:20:10.994289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.373 [2024-07-15 16:20:10.994305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:6624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.373 [2024-07-15 16:20:10.994311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.373 [2024-07-15 16:20:11.006437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.373 [2024-07-15 16:20:11.006453] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.373 [2024-07-15 16:20:11.006459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.373 [2024-07-15 16:20:11.018582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.373 [2024-07-15 16:20:11.018598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:23401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.373 [2024-07-15 16:20:11.018604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.373 [2024-07-15 16:20:11.030820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.373 [2024-07-15 16:20:11.030837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.373 [2024-07-15 16:20:11.030843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.373 [2024-07-15 16:20:11.043657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.373 [2024-07-15 16:20:11.043674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.373 [2024-07-15 16:20:11.043680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.373 [2024-07-15 16:20:11.055757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.373 [2024-07-15 16:20:11.055774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:5173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.373 [2024-07-15 16:20:11.055780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.373 [2024-07-15 16:20:11.068239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.374 [2024-07-15 16:20:11.068256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.374 [2024-07-15 16:20:11.068262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.374 [2024-07-15 16:20:11.081062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.374 [2024-07-15 16:20:11.081079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.374 [2024-07-15 16:20:11.081085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.374 [2024-07-15 16:20:11.092646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 
00:28:35.374 [2024-07-15 16:20:11.092663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.374 [2024-07-15 16:20:11.092669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.374 [2024-07-15 16:20:11.105504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.374 [2024-07-15 16:20:11.105520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.374 [2024-07-15 16:20:11.105526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.374 [2024-07-15 16:20:11.117574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.374 [2024-07-15 16:20:11.117591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:7493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.374 [2024-07-15 16:20:11.117597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.374 [2024-07-15 16:20:11.129692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.374 [2024-07-15 16:20:11.129708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.374 [2024-07-15 16:20:11.129715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.374 [2024-07-15 16:20:11.142199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.374 [2024-07-15 16:20:11.142216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:1354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.374 [2024-07-15 16:20:11.142222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.374 [2024-07-15 16:20:11.155118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.374 [2024-07-15 16:20:11.155139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:7778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.374 [2024-07-15 16:20:11.155146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.374 [2024-07-15 16:20:11.166225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.374 [2024-07-15 16:20:11.166242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:25294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.374 [2024-07-15 16:20:11.166248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.374 [2024-07-15 16:20:11.178509] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.374 [2024-07-15 16:20:11.178526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.374 [2024-07-15 16:20:11.178532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.374 [2024-07-15 16:20:11.191414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.374 [2024-07-15 16:20:11.191431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.374 [2024-07-15 16:20:11.191440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.374 [2024-07-15 16:20:11.203257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.374 [2024-07-15 16:20:11.203273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.374 [2024-07-15 16:20:11.203279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.634 [2024-07-15 16:20:11.215195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.634 [2024-07-15 16:20:11.215212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:15649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.634 [2024-07-15 16:20:11.215219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.634 [2024-07-15 16:20:11.227331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.634 [2024-07-15 16:20:11.227348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.634 [2024-07-15 16:20:11.227355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.634 [2024-07-15 16:20:11.240383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.634 [2024-07-15 16:20:11.240400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:22414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.634 [2024-07-15 16:20:11.240406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.634 [2024-07-15 16:20:11.253048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.634 [2024-07-15 16:20:11.253064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:2408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.634 [2024-07-15 16:20:11.253070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:28:35.634 [2024-07-15 16:20:11.265203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.634 [2024-07-15 16:20:11.265220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.634 [2024-07-15 16:20:11.265226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.634 [2024-07-15 16:20:11.276689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.634 [2024-07-15 16:20:11.276706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.634 [2024-07-15 16:20:11.276712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.634 [2024-07-15 16:20:11.288687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.634 [2024-07-15 16:20:11.288705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.634 [2024-07-15 16:20:11.288711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.634 [2024-07-15 16:20:11.301269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.634 [2024-07-15 16:20:11.301293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.634 [2024-07-15 16:20:11.301299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.634 [2024-07-15 16:20:11.314118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.634 [2024-07-15 16:20:11.314140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.634 [2024-07-15 16:20:11.314146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.634 [2024-07-15 16:20:11.326000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.634 [2024-07-15 16:20:11.326016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.634 [2024-07-15 16:20:11.326023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.634 [2024-07-15 16:20:11.338217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.634 [2024-07-15 16:20:11.338234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:11411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.634 [2024-07-15 16:20:11.338241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.634 [2024-07-15 16:20:11.350973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.634 [2024-07-15 16:20:11.350989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:9812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.634 [2024-07-15 16:20:11.350995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.634 [2024-07-15 16:20:11.362875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.634 [2024-07-15 16:20:11.362892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.634 [2024-07-15 16:20:11.362899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.634 [2024-07-15 16:20:11.374975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.634 [2024-07-15 16:20:11.374991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.634 [2024-07-15 16:20:11.374997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.634 [2024-07-15 16:20:11.387019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.635 [2024-07-15 16:20:11.387035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.635 [2024-07-15 16:20:11.387041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.635 [2024-07-15 16:20:11.399264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.635 [2024-07-15 16:20:11.399281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:23788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.635 [2024-07-15 16:20:11.399288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.635 [2024-07-15 16:20:11.412347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.635 [2024-07-15 16:20:11.412364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.635 [2024-07-15 16:20:11.412370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.635 [2024-07-15 16:20:11.424199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.635 [2024-07-15 16:20:11.424216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.635 [2024-07-15 16:20:11.424223] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.635 [2024-07-15 16:20:11.436075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.635 [2024-07-15 16:20:11.436092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.635 [2024-07-15 16:20:11.436098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.635 [2024-07-15 16:20:11.448252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.635 [2024-07-15 16:20:11.448269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:23855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.635 [2024-07-15 16:20:11.448275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.635 [2024-07-15 16:20:11.460641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.635 [2024-07-15 16:20:11.460658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:20247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.635 [2024-07-15 16:20:11.460664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.635 [2024-07-15 16:20:11.473418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.635 [2024-07-15 16:20:11.473434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.635 [2024-07-15 16:20:11.473441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.895 [2024-07-15 16:20:11.485128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.895 [2024-07-15 16:20:11.485146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.895 [2024-07-15 16:20:11.485153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.895 [2024-07-15 16:20:11.497148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.895 [2024-07-15 16:20:11.497165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:8692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.895 [2024-07-15 16:20:11.497171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.895 [2024-07-15 16:20:11.509644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.895 [2024-07-15 16:20:11.509660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:35.895 [2024-07-15 16:20:11.509670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.895 [2024-07-15 16:20:11.522305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.895 [2024-07-15 16:20:11.522321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.895 [2024-07-15 16:20:11.522327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.895 [2024-07-15 16:20:11.533704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.895 [2024-07-15 16:20:11.533720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.895 [2024-07-15 16:20:11.533727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.895 [2024-07-15 16:20:11.547293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.895 [2024-07-15 16:20:11.547310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.895 [2024-07-15 16:20:11.547316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.895 [2024-07-15 16:20:11.559394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.895 [2024-07-15 16:20:11.559411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:8365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.895 [2024-07-15 16:20:11.559417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.895 [2024-07-15 16:20:11.571515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.895 [2024-07-15 16:20:11.571532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.895 [2024-07-15 16:20:11.571538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.895 [2024-07-15 16:20:11.584438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.895 [2024-07-15 16:20:11.584455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.895 [2024-07-15 16:20:11.584461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.895 [2024-07-15 16:20:11.596498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.895 [2024-07-15 16:20:11.596514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 
lba:287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.895 [2024-07-15 16:20:11.596520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.895 [2024-07-15 16:20:11.608088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.895 [2024-07-15 16:20:11.608105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.895 [2024-07-15 16:20:11.608111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.895 [2024-07-15 16:20:11.620540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.895 [2024-07-15 16:20:11.620556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.895 [2024-07-15 16:20:11.620562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.895 [2024-07-15 16:20:11.633011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.895 [2024-07-15 16:20:11.633028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:15327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.895 [2024-07-15 16:20:11.633034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.895 [2024-07-15 16:20:11.645133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.895 [2024-07-15 16:20:11.645150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:4070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.895 [2024-07-15 16:20:11.645157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.895 [2024-07-15 16:20:11.656835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.895 [2024-07-15 16:20:11.656851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:8075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.895 [2024-07-15 16:20:11.656857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.895 [2024-07-15 16:20:11.668899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.895 [2024-07-15 16:20:11.668915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:20200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.895 [2024-07-15 16:20:11.668921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.895 [2024-07-15 16:20:11.680461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.895 [2024-07-15 16:20:11.680477] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.895 [2024-07-15 16:20:11.680483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.895 [2024-07-15 16:20:11.693303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.895 [2024-07-15 16:20:11.693319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.895 [2024-07-15 16:20:11.693325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.895 [2024-07-15 16:20:11.705364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.895 [2024-07-15 16:20:11.705380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.895 [2024-07-15 16:20:11.705386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.895 [2024-07-15 16:20:11.718119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.895 [2024-07-15 16:20:11.718138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:20327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.895 [2024-07-15 16:20:11.718148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.895 [2024-07-15 16:20:11.731003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:35.896 [2024-07-15 16:20:11.731020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:17056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.896 [2024-07-15 16:20:11.731026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.156 [2024-07-15 16:20:11.743308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:36.156 [2024-07-15 16:20:11.743324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.156 [2024-07-15 16:20:11.743331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.156 [2024-07-15 16:20:11.755470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:36.156 [2024-07-15 16:20:11.755486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:10268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.156 [2024-07-15 16:20:11.755492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.156 [2024-07-15 16:20:11.767964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 
00:28:36.156 [2024-07-15 16:20:11.767980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.156 [2024-07-15 16:20:11.767986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.156 [2024-07-15 16:20:11.780047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:36.156 [2024-07-15 16:20:11.780063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:16778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.156 [2024-07-15 16:20:11.780069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.156 [2024-07-15 16:20:11.792029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:36.156 [2024-07-15 16:20:11.792045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:21258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.156 [2024-07-15 16:20:11.792051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.156 [2024-07-15 16:20:11.803980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:36.156 [2024-07-15 16:20:11.803996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.156 [2024-07-15 16:20:11.804002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.156 [2024-07-15 16:20:11.815832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:36.156 [2024-07-15 16:20:11.815848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.156 [2024-07-15 16:20:11.815854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.156 [2024-07-15 16:20:11.828966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:36.156 [2024-07-15 16:20:11.828985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:14461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.156 [2024-07-15 16:20:11.828992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.156 [2024-07-15 16:20:11.840076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:36.156 [2024-07-15 16:20:11.840092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:18927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.156 [2024-07-15 16:20:11.840098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.156 [2024-07-15 16:20:11.852385] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:36.156 [2024-07-15 16:20:11.852401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:25038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.156 [2024-07-15 16:20:11.852407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.156 [2024-07-15 16:20:11.864096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:36.156 [2024-07-15 16:20:11.864112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.156 [2024-07-15 16:20:11.864118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.156 [2024-07-15 16:20:11.876806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:36.156 [2024-07-15 16:20:11.876822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.156 [2024-07-15 16:20:11.876828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.156 [2024-07-15 16:20:11.890063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:36.156 [2024-07-15 16:20:11.890079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.156 [2024-07-15 16:20:11.890085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.156 [2024-07-15 16:20:11.901296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:36.156 [2024-07-15 16:20:11.901311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.156 [2024-07-15 16:20:11.901317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.156 [2024-07-15 16:20:11.913694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:36.156 [2024-07-15 16:20:11.913710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:8363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.156 [2024-07-15 16:20:11.913716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.156 [2024-07-15 16:20:11.925967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:36.156 [2024-07-15 16:20:11.925983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.156 [2024-07-15 16:20:11.925989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:28:36.156 [2024-07-15 16:20:11.937222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:36.156 [2024-07-15 16:20:11.937238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.156 [2024-07-15 16:20:11.937244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.156 [2024-07-15 16:20:11.950627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:36.156 [2024-07-15 16:20:11.950643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.156 [2024-07-15 16:20:11.950649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.156 [2024-07-15 16:20:11.962964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:36.156 [2024-07-15 16:20:11.962980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.156 [2024-07-15 16:20:11.962986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.156 [2024-07-15 16:20:11.975743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:36.156 [2024-07-15 16:20:11.975759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.156 [2024-07-15 16:20:11.975765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.156 [2024-07-15 16:20:11.987118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:36.156 [2024-07-15 16:20:11.987137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.156 [2024-07-15 16:20:11.987143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.416 [2024-07-15 16:20:11.998738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:36.416 [2024-07-15 16:20:11.998754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:21738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.416 [2024-07-15 16:20:11.998760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.416 [2024-07-15 16:20:12.012135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:36.416 [2024-07-15 16:20:12.012152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:5852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.416 [2024-07-15 16:20:12.012158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.416 [2024-07-15 16:20:12.024272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:36.416 [2024-07-15 16:20:12.024288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.416 [2024-07-15 16:20:12.024294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.416 [2024-07-15 16:20:12.036825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:36.416 [2024-07-15 16:20:12.036841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.416 [2024-07-15 16:20:12.036850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.416 [2024-07-15 16:20:12.047953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:36.416 [2024-07-15 16:20:12.047970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.416 [2024-07-15 16:20:12.047976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.416 [2024-07-15 16:20:12.062379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14b68e0) 00:28:36.416 [2024-07-15 16:20:12.062395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:19763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.416 [2024-07-15 16:20:12.062401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.416 00:28:36.416 Latency(us) 00:28:36.416 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:36.416 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:36.416 nvme0n1 : 2.00 20683.24 80.79 0.00 0.00 6182.08 3631.79 21408.43 00:28:36.416 =================================================================================================================== 00:28:36.416 Total : 20683.24 80.79 0.00 0.00 6182.08 3631.79 21408.43 00:28:36.416 0 00:28:36.416 16:20:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:36.416 16:20:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:36.416 16:20:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:36.416 | .driver_specific 00:28:36.416 | .nvme_error 00:28:36.416 | .status_code 00:28:36.416 | .command_transient_transport_error' 00:28:36.416 16:20:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:36.416 16:20:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 162 > 0 )) 00:28:36.416 16:20:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2461962 00:28:36.416 16:20:12 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2461962 ']' 00:28:36.416 16:20:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2461962 00:28:36.416 16:20:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:28:36.416 16:20:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:36.416 16:20:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2461962 00:28:36.676 16:20:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:36.676 16:20:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:36.676 16:20:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2461962' 00:28:36.676 killing process with pid 2461962 00:28:36.676 16:20:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2461962 00:28:36.676 Received shutdown signal, test time was about 2.000000 seconds 00:28:36.676 00:28:36.676 Latency(us) 00:28:36.676 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:36.676 =================================================================================================================== 00:28:36.676 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:36.676 16:20:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2461962 00:28:36.676 16:20:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:28:36.676 16:20:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:36.676 16:20:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:36.676 16:20:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:36.676 16:20:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:36.676 16:20:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2462649 00:28:36.676 16:20:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2462649 /var/tmp/bperf.sock 00:28:36.676 16:20:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2462649 ']' 00:28:36.676 16:20:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:28:36.676 16:20:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:36.676 16:20:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:36.676 16:20:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:36.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
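For readability, the transient-error check that host/digest.sh runs in the xtrace above is split across several wrapped lines; pieced back together it is the following pipeline. The RPC path, socket, bdev name and jq filter are taken verbatim from the trace; the errcount variable name is only illustrative.

    errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # The run passes only if the injected CRC-32C data digest corruption produced at least one
    # COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion; the trace above counted 162 of them
    # for the 4096-byte randread pass before tearing that bdevperf instance down.
    (( errcount > 0 ))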
00:28:36.676 16:20:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:36.676 16:20:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:36.676 [2024-07-15 16:20:12.466372] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:28:36.676 [2024-07-15 16:20:12.466428] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2462649 ] 00:28:36.676 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:36.676 Zero copy mechanism will not be used. 00:28:36.676 EAL: No free 2048 kB hugepages reported on node 1 00:28:36.936 [2024-07-15 16:20:12.540186] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:36.936 [2024-07-15 16:20:12.593389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:37.505 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:37.505 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:37.505 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:37.505 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:37.765 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:37.765 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.765 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:37.765 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.765 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:37.765 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:37.765 nvme0n1 00:28:38.024 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:38.024 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.024 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:38.024 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.024 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:38.024 16:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:38.024 I/O size of 131072 is greater than zero copy threshold (65536). 
00:28:38.024 Zero copy mechanism will not be used. 00:28:38.024 Running I/O for 2 seconds... 00:28:38.024 [2024-07-15 16:20:13.735530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.024 [2024-07-15 16:20:13.735560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.024 [2024-07-15 16:20:13.735568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.024 [2024-07-15 16:20:13.750558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.024 [2024-07-15 16:20:13.750577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.024 [2024-07-15 16:20:13.750584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.024 [2024-07-15 16:20:13.763188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.024 [2024-07-15 16:20:13.763209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.024 [2024-07-15 16:20:13.763215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.024 [2024-07-15 16:20:13.776232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.024 [2024-07-15 16:20:13.776251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.024 [2024-07-15 16:20:13.776258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.024 [2024-07-15 16:20:13.790227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.024 [2024-07-15 16:20:13.790245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.024 [2024-07-15 16:20:13.790251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.024 [2024-07-15 16:20:13.804668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.024 [2024-07-15 16:20:13.804685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.024 [2024-07-15 16:20:13.804692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.024 [2024-07-15 16:20:13.819727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.024 [2024-07-15 16:20:13.819745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.024 [2024-07-15 
16:20:13.819751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.024 [2024-07-15 16:20:13.834875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.024 [2024-07-15 16:20:13.834892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.024 [2024-07-15 16:20:13.834898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.024 [2024-07-15 16:20:13.850864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.024 [2024-07-15 16:20:13.850881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.024 [2024-07-15 16:20:13.850888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.024 [2024-07-15 16:20:13.864084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.024 [2024-07-15 16:20:13.864101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.024 [2024-07-15 16:20:13.864108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.285 [2024-07-15 16:20:13.877534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.285 [2024-07-15 16:20:13.877551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.285 [2024-07-15 16:20:13.877558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.285 [2024-07-15 16:20:13.893021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.285 [2024-07-15 16:20:13.893039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.285 [2024-07-15 16:20:13.893045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.285 [2024-07-15 16:20:13.907255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.285 [2024-07-15 16:20:13.907273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.285 [2024-07-15 16:20:13.907279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.285 [2024-07-15 16:20:13.921143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.285 [2024-07-15 16:20:13.921160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.285 [2024-07-15 16:20:13.921166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.285 [2024-07-15 16:20:13.934585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.285 [2024-07-15 16:20:13.934603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.285 [2024-07-15 16:20:13.934610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.285 [2024-07-15 16:20:13.947244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.285 [2024-07-15 16:20:13.947261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.285 [2024-07-15 16:20:13.947271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.285 [2024-07-15 16:20:13.953685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.285 [2024-07-15 16:20:13.953702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.285 [2024-07-15 16:20:13.953708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.285 [2024-07-15 16:20:13.968925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.285 [2024-07-15 16:20:13.968942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.285 [2024-07-15 16:20:13.968948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.285 [2024-07-15 16:20:13.980063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.285 [2024-07-15 16:20:13.980080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.285 [2024-07-15 16:20:13.980086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.285 [2024-07-15 16:20:13.994179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.285 [2024-07-15 16:20:13.994196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.285 [2024-07-15 16:20:13.994202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.285 [2024-07-15 16:20:14.008797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.285 [2024-07-15 16:20:14.008814] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.285 [2024-07-15 16:20:14.008820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.285 [2024-07-15 16:20:14.021115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.285 [2024-07-15 16:20:14.021137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.285 [2024-07-15 16:20:14.021143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.285 [2024-07-15 16:20:14.036119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.286 [2024-07-15 16:20:14.036141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.286 [2024-07-15 16:20:14.036147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.286 [2024-07-15 16:20:14.050875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.286 [2024-07-15 16:20:14.050892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.286 [2024-07-15 16:20:14.050899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.286 [2024-07-15 16:20:14.065371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.286 [2024-07-15 16:20:14.065388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.286 [2024-07-15 16:20:14.065394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.286 [2024-07-15 16:20:14.078998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.286 [2024-07-15 16:20:14.079015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.286 [2024-07-15 16:20:14.079022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.286 [2024-07-15 16:20:14.091937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.286 [2024-07-15 16:20:14.091955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.286 [2024-07-15 16:20:14.091961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.286 [2024-07-15 16:20:14.106465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.286 [2024-07-15 
16:20:14.106482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.286 [2024-07-15 16:20:14.106489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.286 [2024-07-15 16:20:14.120003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.286 [2024-07-15 16:20:14.120019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.286 [2024-07-15 16:20:14.120025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.547 [2024-07-15 16:20:14.132615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.547 [2024-07-15 16:20:14.132633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.547 [2024-07-15 16:20:14.132639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.547 [2024-07-15 16:20:14.146139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.547 [2024-07-15 16:20:14.146156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.547 [2024-07-15 16:20:14.146162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.547 [2024-07-15 16:20:14.159823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.547 [2024-07-15 16:20:14.159841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.547 [2024-07-15 16:20:14.159847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.547 [2024-07-15 16:20:14.174869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.547 [2024-07-15 16:20:14.174886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.547 [2024-07-15 16:20:14.174896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.547 [2024-07-15 16:20:14.186979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.547 [2024-07-15 16:20:14.186997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.547 [2024-07-15 16:20:14.187003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.547 [2024-07-15 16:20:14.202746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1f57b80) 00:28:38.547 [2024-07-15 16:20:14.202764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.547 [2024-07-15 16:20:14.202771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.547 [2024-07-15 16:20:14.215953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.547 [2024-07-15 16:20:14.215971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.547 [2024-07-15 16:20:14.215978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.547 [2024-07-15 16:20:14.231436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.547 [2024-07-15 16:20:14.231454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.547 [2024-07-15 16:20:14.231460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.547 [2024-07-15 16:20:14.246753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.547 [2024-07-15 16:20:14.246771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.547 [2024-07-15 16:20:14.246777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.547 [2024-07-15 16:20:14.260826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.547 [2024-07-15 16:20:14.260844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.547 [2024-07-15 16:20:14.260851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.547 [2024-07-15 16:20:14.276407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.547 [2024-07-15 16:20:14.276424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.547 [2024-07-15 16:20:14.276430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.547 [2024-07-15 16:20:14.290372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.547 [2024-07-15 16:20:14.290391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.547 [2024-07-15 16:20:14.290397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.547 [2024-07-15 16:20:14.306222] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.547 [2024-07-15 16:20:14.306243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.547 [2024-07-15 16:20:14.306249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.547 [2024-07-15 16:20:14.318696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.547 [2024-07-15 16:20:14.318713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.547 [2024-07-15 16:20:14.318719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.547 [2024-07-15 16:20:14.332572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.547 [2024-07-15 16:20:14.332589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.547 [2024-07-15 16:20:14.332595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.547 [2024-07-15 16:20:14.346659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.547 [2024-07-15 16:20:14.346676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.547 [2024-07-15 16:20:14.346682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.547 [2024-07-15 16:20:14.361052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.547 [2024-07-15 16:20:14.361069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.547 [2024-07-15 16:20:14.361075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.547 [2024-07-15 16:20:14.376032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.547 [2024-07-15 16:20:14.376050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.547 [2024-07-15 16:20:14.376055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.547 [2024-07-15 16:20:14.387571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.547 [2024-07-15 16:20:14.387588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.547 [2024-07-15 16:20:14.387594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:28:38.807 [2024-07-15 16:20:14.399251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.807 [2024-07-15 16:20:14.399269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.807 [2024-07-15 16:20:14.399275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.807 [2024-07-15 16:20:14.412304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.807 [2024-07-15 16:20:14.412321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.807 [2024-07-15 16:20:14.412327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.807 [2024-07-15 16:20:14.426966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.807 [2024-07-15 16:20:14.426984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.807 [2024-07-15 16:20:14.426990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.807 [2024-07-15 16:20:14.441178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.807 [2024-07-15 16:20:14.441195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.807 [2024-07-15 16:20:14.441201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.807 [2024-07-15 16:20:14.457384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.807 [2024-07-15 16:20:14.457401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.807 [2024-07-15 16:20:14.457407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.807 [2024-07-15 16:20:14.471177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.807 [2024-07-15 16:20:14.471194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.807 [2024-07-15 16:20:14.471200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.807 [2024-07-15 16:20:14.483277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.807 [2024-07-15 16:20:14.483294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.807 [2024-07-15 16:20:14.483300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.807 [2024-07-15 16:20:14.494579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.807 [2024-07-15 16:20:14.494596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.807 [2024-07-15 16:20:14.494602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.807 [2024-07-15 16:20:14.505285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.807 [2024-07-15 16:20:14.505302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.807 [2024-07-15 16:20:14.505308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.807 [2024-07-15 16:20:14.516106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.807 [2024-07-15 16:20:14.516128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.807 [2024-07-15 16:20:14.516134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.807 [2024-07-15 16:20:14.531380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.807 [2024-07-15 16:20:14.531398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.807 [2024-07-15 16:20:14.531407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.807 [2024-07-15 16:20:14.542460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.807 [2024-07-15 16:20:14.542478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.807 [2024-07-15 16:20:14.542484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.807 [2024-07-15 16:20:14.555998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.807 [2024-07-15 16:20:14.556016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.807 [2024-07-15 16:20:14.556022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.807 [2024-07-15 16:20:14.570844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.807 [2024-07-15 16:20:14.570862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.807 [2024-07-15 16:20:14.570868] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.807 [2024-07-15 16:20:14.585846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.807 [2024-07-15 16:20:14.585864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.807 [2024-07-15 16:20:14.585870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.807 [2024-07-15 16:20:14.598415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.807 [2024-07-15 16:20:14.598432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.807 [2024-07-15 16:20:14.598438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.807 [2024-07-15 16:20:14.611165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.807 [2024-07-15 16:20:14.611182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.807 [2024-07-15 16:20:14.611188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.807 [2024-07-15 16:20:14.625924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.807 [2024-07-15 16:20:14.625941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.807 [2024-07-15 16:20:14.625947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.807 [2024-07-15 16:20:14.639947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:38.807 [2024-07-15 16:20:14.639964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.807 [2024-07-15 16:20:14.639971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.068 [2024-07-15 16:20:14.654428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.068 [2024-07-15 16:20:14.654446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.068 [2024-07-15 16:20:14.654453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.068 [2024-07-15 16:20:14.667478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.068 [2024-07-15 16:20:14.667495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:39.068 [2024-07-15 16:20:14.667501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.069 [2024-07-15 16:20:14.681622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.069 [2024-07-15 16:20:14.681640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.069 [2024-07-15 16:20:14.681646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.069 [2024-07-15 16:20:14.696852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.069 [2024-07-15 16:20:14.696870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.069 [2024-07-15 16:20:14.696875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.069 [2024-07-15 16:20:14.710416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.069 [2024-07-15 16:20:14.710433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.069 [2024-07-15 16:20:14.710440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.069 [2024-07-15 16:20:14.722655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.069 [2024-07-15 16:20:14.722673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.069 [2024-07-15 16:20:14.722679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.069 [2024-07-15 16:20:14.736338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.069 [2024-07-15 16:20:14.736355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.069 [2024-07-15 16:20:14.736361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.069 [2024-07-15 16:20:14.750090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.069 [2024-07-15 16:20:14.750107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.069 [2024-07-15 16:20:14.750114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.069 [2024-07-15 16:20:14.766485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.069 [2024-07-15 16:20:14.766502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.069 [2024-07-15 16:20:14.766512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.069 [2024-07-15 16:20:14.778817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.069 [2024-07-15 16:20:14.778835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.069 [2024-07-15 16:20:14.778841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.069 [2024-07-15 16:20:14.790317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.069 [2024-07-15 16:20:14.790333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.069 [2024-07-15 16:20:14.790340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.069 [2024-07-15 16:20:14.804263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.069 [2024-07-15 16:20:14.804279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.069 [2024-07-15 16:20:14.804285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.069 [2024-07-15 16:20:14.817177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.069 [2024-07-15 16:20:14.817195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.069 [2024-07-15 16:20:14.817201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.069 [2024-07-15 16:20:14.830002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.069 [2024-07-15 16:20:14.830020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.069 [2024-07-15 16:20:14.830027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.069 [2024-07-15 16:20:14.842727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.069 [2024-07-15 16:20:14.842744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.069 [2024-07-15 16:20:14.842750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.069 [2024-07-15 16:20:14.855772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.069 [2024-07-15 16:20:14.855790] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.069 [2024-07-15 16:20:14.855796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.069 [2024-07-15 16:20:14.868321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.069 [2024-07-15 16:20:14.868339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.069 [2024-07-15 16:20:14.868345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.069 [2024-07-15 16:20:14.882653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.069 [2024-07-15 16:20:14.882674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.069 [2024-07-15 16:20:14.882680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.069 [2024-07-15 16:20:14.896086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.069 [2024-07-15 16:20:14.896104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.069 [2024-07-15 16:20:14.896110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.330 [2024-07-15 16:20:14.911265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.330 [2024-07-15 16:20:14.911282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.330 [2024-07-15 16:20:14.911288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.330 [2024-07-15 16:20:14.927530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.330 [2024-07-15 16:20:14.927546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.330 [2024-07-15 16:20:14.927552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.330 [2024-07-15 16:20:14.942434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.330 [2024-07-15 16:20:14.942452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.330 [2024-07-15 16:20:14.942458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.330 [2024-07-15 16:20:14.957891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 
00:28:39.330 [2024-07-15 16:20:14.957909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.330 [2024-07-15 16:20:14.957915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.330 [2024-07-15 16:20:14.972376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.330 [2024-07-15 16:20:14.972393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.330 [2024-07-15 16:20:14.972399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.330 [2024-07-15 16:20:14.986337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.330 [2024-07-15 16:20:14.986354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.330 [2024-07-15 16:20:14.986360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.330 [2024-07-15 16:20:15.001745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.330 [2024-07-15 16:20:15.001762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.330 [2024-07-15 16:20:15.001768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.330 [2024-07-15 16:20:15.015456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.330 [2024-07-15 16:20:15.015473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.330 [2024-07-15 16:20:15.015480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.330 [2024-07-15 16:20:15.028148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.330 [2024-07-15 16:20:15.028165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.330 [2024-07-15 16:20:15.028171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.330 [2024-07-15 16:20:15.041010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.330 [2024-07-15 16:20:15.041028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.330 [2024-07-15 16:20:15.041034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.330 [2024-07-15 16:20:15.052569] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.330 [2024-07-15 16:20:15.052586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.330 [2024-07-15 16:20:15.052592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.330 [2024-07-15 16:20:15.066739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.330 [2024-07-15 16:20:15.066756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.330 [2024-07-15 16:20:15.066762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.330 [2024-07-15 16:20:15.081513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.330 [2024-07-15 16:20:15.081529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.330 [2024-07-15 16:20:15.081535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.331 [2024-07-15 16:20:15.094119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.331 [2024-07-15 16:20:15.094141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.331 [2024-07-15 16:20:15.094147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.331 [2024-07-15 16:20:15.108250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.331 [2024-07-15 16:20:15.108267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.331 [2024-07-15 16:20:15.108273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.331 [2024-07-15 16:20:15.122404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.331 [2024-07-15 16:20:15.122424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.331 [2024-07-15 16:20:15.122430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.331 [2024-07-15 16:20:15.138214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.331 [2024-07-15 16:20:15.138231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.331 [2024-07-15 16:20:15.138237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:28:39.331 [2024-07-15 16:20:15.154083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.331 [2024-07-15 16:20:15.154100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.331 [2024-07-15 16:20:15.154106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.331 [2024-07-15 16:20:15.168347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.331 [2024-07-15 16:20:15.168364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.331 [2024-07-15 16:20:15.168371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.592 [2024-07-15 16:20:15.180120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.592 [2024-07-15 16:20:15.180141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.592 [2024-07-15 16:20:15.180148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.592 [2024-07-15 16:20:15.195981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.592 [2024-07-15 16:20:15.195998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.592 [2024-07-15 16:20:15.196004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.592 [2024-07-15 16:20:15.211318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.592 [2024-07-15 16:20:15.211335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.592 [2024-07-15 16:20:15.211341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.592 [2024-07-15 16:20:15.224524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.592 [2024-07-15 16:20:15.224541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.592 [2024-07-15 16:20:15.224547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.592 [2024-07-15 16:20:15.239390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.592 [2024-07-15 16:20:15.239407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.592 [2024-07-15 16:20:15.239413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.592 [2024-07-15 16:20:15.253116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.592 [2024-07-15 16:20:15.253137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.592 [2024-07-15 16:20:15.253143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.592 [2024-07-15 16:20:15.265241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.592 [2024-07-15 16:20:15.265258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.592 [2024-07-15 16:20:15.265264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.592 [2024-07-15 16:20:15.279887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.592 [2024-07-15 16:20:15.279904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.592 [2024-07-15 16:20:15.279910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.592 [2024-07-15 16:20:15.294111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.592 [2024-07-15 16:20:15.294132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.592 [2024-07-15 16:20:15.294139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.592 [2024-07-15 16:20:15.307925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.592 [2024-07-15 16:20:15.307942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.592 [2024-07-15 16:20:15.307948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.592 [2024-07-15 16:20:15.323231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.592 [2024-07-15 16:20:15.323248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.592 [2024-07-15 16:20:15.323254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.592 [2024-07-15 16:20:15.338845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.592 [2024-07-15 16:20:15.338862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.592 [2024-07-15 16:20:15.338868] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.592 [2024-07-15 16:20:15.353789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.592 [2024-07-15 16:20:15.353806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.592 [2024-07-15 16:20:15.353812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.592 [2024-07-15 16:20:15.368753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.592 [2024-07-15 16:20:15.368770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.592 [2024-07-15 16:20:15.368779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.592 [2024-07-15 16:20:15.384450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.592 [2024-07-15 16:20:15.384467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.592 [2024-07-15 16:20:15.384473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.592 [2024-07-15 16:20:15.398786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.592 [2024-07-15 16:20:15.398803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.592 [2024-07-15 16:20:15.398810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.592 [2024-07-15 16:20:15.412935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.592 [2024-07-15 16:20:15.412952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.592 [2024-07-15 16:20:15.412958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.592 [2024-07-15 16:20:15.427453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.592 [2024-07-15 16:20:15.427470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.592 [2024-07-15 16:20:15.427476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.853 [2024-07-15 16:20:15.441370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.853 [2024-07-15 16:20:15.441387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:39.853 [2024-07-15 16:20:15.441393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.853 [2024-07-15 16:20:15.456500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.853 [2024-07-15 16:20:15.456517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.853 [2024-07-15 16:20:15.456523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.853 [2024-07-15 16:20:15.470833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.853 [2024-07-15 16:20:15.470850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.853 [2024-07-15 16:20:15.470856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.853 [2024-07-15 16:20:15.486005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.853 [2024-07-15 16:20:15.486023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.853 [2024-07-15 16:20:15.486030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.853 [2024-07-15 16:20:15.499942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.853 [2024-07-15 16:20:15.499961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.853 [2024-07-15 16:20:15.499967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.853 [2024-07-15 16:20:15.514593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.853 [2024-07-15 16:20:15.514610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.853 [2024-07-15 16:20:15.514617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.853 [2024-07-15 16:20:15.525638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.853 [2024-07-15 16:20:15.525655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.853 [2024-07-15 16:20:15.525661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.853 [2024-07-15 16:20:15.537932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.853 [2024-07-15 16:20:15.537949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.853 [2024-07-15 16:20:15.537955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.853 [2024-07-15 16:20:15.550810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.853 [2024-07-15 16:20:15.550827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.853 [2024-07-15 16:20:15.550833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.853 [2024-07-15 16:20:15.563242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.853 [2024-07-15 16:20:15.563259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.853 [2024-07-15 16:20:15.563265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.853 [2024-07-15 16:20:15.576262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.853 [2024-07-15 16:20:15.576278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.853 [2024-07-15 16:20:15.576284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.853 [2024-07-15 16:20:15.588042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.853 [2024-07-15 16:20:15.588059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.853 [2024-07-15 16:20:15.588065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.853 [2024-07-15 16:20:15.603792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.853 [2024-07-15 16:20:15.603809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.853 [2024-07-15 16:20:15.603818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.853 [2024-07-15 16:20:15.618182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.853 [2024-07-15 16:20:15.618200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.853 [2024-07-15 16:20:15.618206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.853 [2024-07-15 16:20:15.633268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.853 [2024-07-15 16:20:15.633285] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.853 [2024-07-15 16:20:15.633291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.853 [2024-07-15 16:20:15.645989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.853 [2024-07-15 16:20:15.646007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.853 [2024-07-15 16:20:15.646013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.853 [2024-07-15 16:20:15.659742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.853 [2024-07-15 16:20:15.659759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.853 [2024-07-15 16:20:15.659765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.853 [2024-07-15 16:20:15.673091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.853 [2024-07-15 16:20:15.673107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.853 [2024-07-15 16:20:15.673114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.853 [2024-07-15 16:20:15.687673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:39.853 [2024-07-15 16:20:15.687691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.853 [2024-07-15 16:20:15.687697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.114 [2024-07-15 16:20:15.702116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:40.114 [2024-07-15 16:20:15.702138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.114 [2024-07-15 16:20:15.702145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.114 [2024-07-15 16:20:15.713215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f57b80) 00:28:40.114 [2024-07-15 16:20:15.713232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.114 [2024-07-15 16:20:15.713238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.114 00:28:40.115 Latency(us) 00:28:40.115 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:40.115 Job: nvme0n1 (Core 
Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:28:40.115 nvme0n1 : 2.00 2235.00 279.37 0.00 0.00 7154.77 3659.09 16493.23
00:28:40.115 ===================================================================================================================
00:28:40.115 Total : 2235.00 279.37 0.00 0.00 7154.77 3659.09 16493.23
00:28:40.115 0
00:28:40.115 16:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:40.115 16:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:40.115 | .driver_specific
00:28:40.115 | .nvme_error
00:28:40.115 | .status_code
00:28:40.115 | .command_transient_transport_error'
00:28:40.115 16:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:40.115 16:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:40.115 16:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 144 > 0 ))
00:28:40.115 16:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2462649
00:28:40.115 16:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2462649 ']'
00:28:40.115 16:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2462649
00:28:40.115 16:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:28:40.115 16:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:28:40.115 16:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2462649
00:28:40.115 16:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:28:40.115 16:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:28:40.115 16:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2462649'
00:28:40.115 killing process with pid 2462649
00:28:40.115 16:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2462649
Received shutdown signal, test time was about 2.000000 seconds
00:28:40.115
00:28:40.115 Latency(us)
00:28:40.115 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:40.115 ===================================================================================================================
00:28:40.115 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:40.115 16:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2462649
00:28:40.115 16:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:28:40.376 16:20:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:40.376 16:20:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:28:40.376 16:20:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:28:40.376 16:20:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:28:40.376 16:20:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2463352
00:28:40.376 16:20:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2463352 /var/tmp/bperf.sock
00:28:40.376 16:20:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2463352 ']'
00:28:40.376 16:20:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:28:40.376 16:20:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:40.376 16:20:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:28:40.376 16:20:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:40.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:40.376 16:20:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:28:40.376 16:20:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:40.376 [2024-07-15 16:20:16.110664] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization...
00:28:40.376 [2024-07-15 16:20:16.110720] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2463352 ]
00:28:40.376 EAL: No free 2048 kB hugepages reported on node 1
00:28:40.376 [2024-07-15 16:20:16.187192] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:40.636 [2024-07-15 16:20:16.238984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:28:41.208 16:20:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:28:41.208 16:20:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:28:41.208 16:20:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:41.208 16:20:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:41.208 16:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:41.208 16:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:41.208 16:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:41.208 16:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:41.208 16:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:41.208 16:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:41.468 nvme0n1
00:28:41.468 16:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- #
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:41.468 16:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.468 16:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:41.468 16:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.468 16:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:41.468 16:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:41.734 Running I/O for 2 seconds... 00:28:41.734 [2024-07-15 16:20:17.377295] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190eb760 00:28:41.734 [2024-07-15 16:20:17.379062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.734 [2024-07-15 16:20:17.379090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:41.734 [2024-07-15 16:20:17.388041] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e6fa8 00:28:41.734 [2024-07-15 16:20:17.389294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.734 [2024-07-15 16:20:17.389314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:41.734 [2024-07-15 16:20:17.401208] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190ea248 00:28:41.734 [2024-07-15 16:20:17.402929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:15444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.734 [2024-07-15 16:20:17.402945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:41.734 [2024-07-15 16:20:17.411092] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190ea680 00:28:41.734 [2024-07-15 16:20:17.412231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.734 [2024-07-15 16:20:17.412246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:41.734 [2024-07-15 16:20:17.423676] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190fe720 00:28:41.734 [2024-07-15 16:20:17.424941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:21825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.734 [2024-07-15 16:20:17.424958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:41.734 [2024-07-15 16:20:17.435460] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190ea680 00:28:41.734 [2024-07-15 16:20:17.436576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:96 nsid:1 lba:14273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.734 [2024-07-15 16:20:17.436592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:41.734 [2024-07-15 16:20:17.448797] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e01f8 00:28:41.734 [2024-07-15 16:20:17.450670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.734 [2024-07-15 16:20:17.450687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:41.734 [2024-07-15 16:20:17.459047] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e6738 00:28:41.734 [2024-07-15 16:20:17.460166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:3491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.734 [2024-07-15 16:20:17.460182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:41.734 [2024-07-15 16:20:17.470066] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190feb58 00:28:41.734 [2024-07-15 16:20:17.471177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.734 [2024-07-15 16:20:17.471193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:41.734 [2024-07-15 16:20:17.481826] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190fd640 00:28:41.734 [2024-07-15 16:20:17.483013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:16350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.734 [2024-07-15 16:20:17.483028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:41.734 [2024-07-15 16:20:17.494426] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e95a0 00:28:41.734 [2024-07-15 16:20:17.495602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:13761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.734 [2024-07-15 16:20:17.495618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:41.734 [2024-07-15 16:20:17.505474] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190dece0 00:28:41.734 [2024-07-15 16:20:17.506548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.734 [2024-07-15 16:20:17.506563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:41.734 [2024-07-15 16:20:17.517189] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e84c0 00:28:41.734 [2024-07-15 16:20:17.518262] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.734 [2024-07-15 16:20:17.518277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:41.734 [2024-07-15 16:20:17.529699] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190df118 00:28:41.734 [2024-07-15 16:20:17.530761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:25164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.734 [2024-07-15 16:20:17.530776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:41.734 [2024-07-15 16:20:17.542970] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e88f8 00:28:41.734 [2024-07-15 16:20:17.544692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.734 [2024-07-15 16:20:17.544708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:41.734 [2024-07-15 16:20:17.553206] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190de470 00:28:41.734 [2024-07-15 16:20:17.554411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:21417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.734 [2024-07-15 16:20:17.554426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:41.734 [2024-07-15 16:20:17.564217] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e73e0 00:28:41.734 [2024-07-15 16:20:17.565286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.734 [2024-07-15 16:20:17.565302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:42.047 [2024-07-15 16:20:17.576716] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190dece0 00:28:42.047 [2024-07-15 16:20:17.577857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.047 [2024-07-15 16:20:17.577873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:42.047 [2024-07-15 16:20:17.590046] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190ddc00 00:28:42.047 [2024-07-15 16:20:17.591860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.047 [2024-07-15 16:20:17.591875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:42.047 [2024-07-15 16:20:17.601793] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e8088 00:28:42.047 [2024-07-15 
16:20:17.603491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.047 [2024-07-15 16:20:17.603505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:42.047 [2024-07-15 16:20:17.611280] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e7c50 00:28:42.047 [2024-07-15 16:20:17.612320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:19586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.047 [2024-07-15 16:20:17.612335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:42.047 [2024-07-15 16:20:17.625295] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190de8a8 00:28:42.047 [2024-07-15 16:20:17.627045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.047 [2024-07-15 16:20:17.627061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.047 [2024-07-15 16:20:17.635937] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190f1430 00:28:42.047 [2024-07-15 16:20:17.637145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:13796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.047 [2024-07-15 16:20:17.637160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:42.047 [2024-07-15 16:20:17.649410] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190fc560 00:28:42.047 [2024-07-15 16:20:17.651337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.047 [2024-07-15 16:20:17.651352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:42.047 [2024-07-15 16:20:17.659697] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190fac10 00:28:42.047 [2024-07-15 16:20:17.661007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.047 [2024-07-15 16:20:17.661022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:42.047 [2024-07-15 16:20:17.673004] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190f20d8 00:28:42.047 [2024-07-15 16:20:17.674957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:18233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.047 [2024-07-15 16:20:17.674972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:42.047 [2024-07-15 16:20:17.683624] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190edd58 
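The randwrite error-injection pass now running follows the same sequence the randread pass just completed: bdevperf is started against /var/tmp/bperf.sock, the controller is attached with --ddgst so data digests are validated, crc32c corruption is injected through the accel error RPC, and the resulting COMMAND TRANSIENT TRANSPORT ERROR completions are counted from bdev_get_iostat. Below is a minimal stand-alone sketch of that sequence, assembled only from the RPC calls visible in this log; the SPDK checkout path, the 10.0.0.2:4420 subsystem, and the use of the target's default RPC socket for the accel injection are assumptions taken from this particular CI job and would differ on another setup.

  #!/usr/bin/env bash
  # Sketch only: replays the digest-error flow traced in this log.
  # Assumes an SPDK checkout at $SPDK_DIR, an NVMe-oF TCP target already
  # serving nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 on its default RPC
  # socket, and free hugepages for a second SPDK process.
  set -euo pipefail

  SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
  BPERF_SOCK=/var/tmp/bperf.sock
  BPERF_RPC="$SPDK_DIR/scripts/rpc.py -s $BPERF_SOCK"
  TGT_RPC="$SPDK_DIR/scripts/rpc.py"   # target's default RPC socket (assumption)

  # Start bdevperf with the same arguments as host/digest.sh and wait for
  # its RPC socket (a crude stand-in for the waitforlisten helper).
  "$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &
  bperfpid=$!
  until [[ -S $BPERF_SOCK ]]; do sleep 0.2; done

  # Initiator side: keep per-NVMe error statistics, use the same retry
  # setting as the trace, and attach the controller with data digest enabled.
  $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Inject crc32c corruption with the same arguments as the trace, run the
  # I/O job, then read back the transient-transport-error counter.
  $TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 256
  "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests
  $BPERF_RPC bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

  kill "$bperfpid"
  wait "$bperfpid" || true

In the traced run the equivalent counter for the randread pass came back as 144, which is what the (( 144 > 0 )) check earlier in this log is asserting.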
00:28:42.047 [2024-07-15 16:20:17.685090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.047 [2024-07-15 16:20:17.685106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:42.047 [2024-07-15 16:20:17.694757] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e4140 00:28:42.047 [2024-07-15 16:20:17.696094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.047 [2024-07-15 16:20:17.696112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:42.047 [2024-07-15 16:20:17.706475] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190f5378 00:28:42.047 [2024-07-15 16:20:17.707921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:9845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.047 [2024-07-15 16:20:17.707937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:42.047 [2024-07-15 16:20:17.718982] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190f5378 00:28:42.047 [2024-07-15 16:20:17.720389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:1304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.047 [2024-07-15 16:20:17.720404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:42.047 [2024-07-15 16:20:17.729947] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e4140 00:28:42.047 [2024-07-15 16:20:17.731280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:8183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.047 [2024-07-15 16:20:17.731295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:42.047 [2024-07-15 16:20:17.742485] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190f6cc8 00:28:42.047 [2024-07-15 16:20:17.743817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:18032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.047 [2024-07-15 16:20:17.743833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:42.047 [2024-07-15 16:20:17.755819] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e84c0 00:28:42.047 [2024-07-15 16:20:17.757917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.047 [2024-07-15 16:20:17.757932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:42.047 [2024-07-15 16:20:17.766159] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) 
with pdu=0x2000190f7da8 00:28:42.047 [2024-07-15 16:20:17.767480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.047 [2024-07-15 16:20:17.767495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:42.047 [2024-07-15 16:20:17.777964] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190f8e88 00:28:42.047 [2024-07-15 16:20:17.779378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.048 [2024-07-15 16:20:17.779393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:42.048 [2024-07-15 16:20:17.788907] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190f96f8 00:28:42.048 [2024-07-15 16:20:17.790402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:17478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.048 [2024-07-15 16:20:17.790417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:42.048 [2024-07-15 16:20:17.799550] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e8d30 00:28:42.048 [2024-07-15 16:20:17.800372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.048 [2024-07-15 16:20:17.800388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:42.048 [2024-07-15 16:20:17.810654] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190ddc00 00:28:42.048 [2024-07-15 16:20:17.811570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:15696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.048 [2024-07-15 16:20:17.811585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:42.048 [2024-07-15 16:20:17.823188] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190ddc00 00:28:42.048 [2024-07-15 16:20:17.824121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:23962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.048 [2024-07-15 16:20:17.824139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:42.048 [2024-07-15 16:20:17.834978] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190ddc00 00:28:42.048 [2024-07-15 16:20:17.835776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.048 [2024-07-15 16:20:17.835791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:42.048 [2024-07-15 16:20:17.846733] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1272aa0) with pdu=0x2000190ddc00 00:28:42.048 [2024-07-15 16:20:17.847608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:1762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.048 [2024-07-15 16:20:17.847624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:42.048 [2024-07-15 16:20:17.860027] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190f46d0 00:28:42.048 [2024-07-15 16:20:17.861474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:15690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.048 [2024-07-15 16:20:17.861489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:42.048 [2024-07-15 16:20:17.870204] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190ebfd0 00:28:42.048 [2024-07-15 16:20:17.871125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:8689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.048 [2024-07-15 16:20:17.871140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:42.048 [2024-07-15 16:20:17.881976] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190ebfd0 00:28:42.048 [2024-07-15 16:20:17.882863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:21334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.048 [2024-07-15 16:20:17.882878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:42.310 [2024-07-15 16:20:17.893739] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190ebfd0 00:28:42.310 [2024-07-15 16:20:17.894518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:15628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.310 [2024-07-15 16:20:17.894534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:42.310 [2024-07-15 16:20:17.907004] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190ebfd0 00:28:42.310 [2024-07-15 16:20:17.908544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:17168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.310 [2024-07-15 16:20:17.908560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:42.310 [2024-07-15 16:20:17.917257] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190fd208 00:28:42.310 [2024-07-15 16:20:17.918020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.310 [2024-07-15 16:20:17.918036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:42.310 [2024-07-15 16:20:17.929007] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190fd208 00:28:42.310 [2024-07-15 16:20:17.929785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.310 [2024-07-15 16:20:17.929800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:42.310 [2024-07-15 16:20:17.942292] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190f0bc0 00:28:42.310 [2024-07-15 16:20:17.943814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:25189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.310 [2024-07-15 16:20:17.943829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:42.310 [2024-07-15 16:20:17.952568] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190dece0 00:28:42.310 [2024-07-15 16:20:17.953323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:18347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.310 [2024-07-15 16:20:17.953339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:42.310 [2024-07-15 16:20:17.964320] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190dece0 00:28:42.310 [2024-07-15 16:20:17.965152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:8810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.310 [2024-07-15 16:20:17.965167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:42.310 [2024-07-15 16:20:17.977596] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190dece0 00:28:42.310 [2024-07-15 16:20:17.979104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:15875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.310 [2024-07-15 16:20:17.979120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:42.310 [2024-07-15 16:20:17.987128] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190f6020 00:28:42.310 [2024-07-15 16:20:17.987880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:17896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.310 [2024-07-15 16:20:17.987895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:42.310 [2024-07-15 16:20:18.001127] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190f1430 00:28:42.310 [2024-07-15 16:20:18.002608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.310 [2024-07-15 16:20:18.002626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:42.310 
[2024-07-15 16:20:18.010579] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e01f8 00:28:42.310 [2024-07-15 16:20:18.011417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.310 [2024-07-15 16:20:18.011432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:42.310 [2024-07-15 16:20:18.024588] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e01f8 00:28:42.310 [2024-07-15 16:20:18.026064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.310 [2024-07-15 16:20:18.026080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:42.310 [2024-07-15 16:20:18.034781] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190df988 00:28:42.310 [2024-07-15 16:20:18.035500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.310 [2024-07-15 16:20:18.035515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.310 [2024-07-15 16:20:18.048106] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190ecc78 00:28:42.310 [2024-07-15 16:20:18.049481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.310 [2024-07-15 16:20:18.049496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:42.310 [2024-07-15 16:20:18.059783] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e73e0 00:28:42.310 [2024-07-15 16:20:18.061129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:9081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.310 [2024-07-15 16:20:18.061144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:42.310 [2024-07-15 16:20:18.071682] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190f0ff8 00:28:42.310 [2024-07-15 16:20:18.073134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.310 [2024-07-15 16:20:18.073150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:42.310 [2024-07-15 16:20:18.085674] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190df118 00:28:42.310 [2024-07-15 16:20:18.087654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:3658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.310 [2024-07-15 16:20:18.087669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007d p:0 
m:0 dnr:0 00:28:42.310 [2024-07-15 16:20:18.095177] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190fcdd0 00:28:42.310 [2024-07-15 16:20:18.096495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.310 [2024-07-15 16:20:18.096511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:42.310 [2024-07-15 16:20:18.107731] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190f0ff8 00:28:42.310 [2024-07-15 16:20:18.109057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:3200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.310 [2024-07-15 16:20:18.109073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:42.310 [2024-07-15 16:20:18.121042] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190df118 00:28:42.310 [2024-07-15 16:20:18.123019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.310 [2024-07-15 16:20:18.123034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:42.310 [2024-07-15 16:20:18.131210] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190de8a8 00:28:42.310 [2024-07-15 16:20:18.132622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:7696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.310 [2024-07-15 16:20:18.132638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:42.310 [2024-07-15 16:20:18.142994] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190de8a8 00:28:42.310 [2024-07-15 16:20:18.144300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:10766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.310 [2024-07-15 16:20:18.144316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:42.572 [2024-07-15 16:20:18.156252] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190de8a8 00:28:42.572 [2024-07-15 16:20:18.158343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.572 [2024-07-15 16:20:18.158359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:42.572 [2024-07-15 16:20:18.165439] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190de038 00:28:42.572 [2024-07-15 16:20:18.166398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:33 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.572 [2024-07-15 16:20:18.166413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 
cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:42.572 [2024-07-15 16:20:18.179709] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190f0788 00:28:42.572 [2024-07-15 16:20:18.181752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.572 [2024-07-15 16:20:18.181767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:42.572 [2024-07-15 16:20:18.190403] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190fdeb0 00:28:42.572 [2024-07-15 16:20:18.191955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:15186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.572 [2024-07-15 16:20:18.191971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:42.572 [2024-07-15 16:20:18.199888] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e3d08 00:28:42.572 [2024-07-15 16:20:18.200783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:19455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.572 [2024-07-15 16:20:18.200798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:42.572 [2024-07-15 16:20:18.211817] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190f0bc0 00:28:42.572 [2024-07-15 16:20:18.212695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:2449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.572 [2024-07-15 16:20:18.212710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:42.572 [2024-07-15 16:20:18.223618] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190ddc00 00:28:42.572 [2024-07-15 16:20:18.224517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.572 [2024-07-15 16:20:18.224532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:42.572 [2024-07-15 16:20:18.235427] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190f5378 00:28:42.572 [2024-07-15 16:20:18.236245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.572 [2024-07-15 16:20:18.236261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:42.572 [2024-07-15 16:20:18.246408] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e8d30 00:28:42.572 [2024-07-15 16:20:18.247284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.572 [2024-07-15 16:20:18.247299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:81 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:42.572 [2024-07-15 16:20:18.258943] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e8d30 00:28:42.572 [2024-07-15 16:20:18.259721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:21514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.572 [2024-07-15 16:20:18.259737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:42.572 [2024-07-15 16:20:18.272175] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e7c50 00:28:42.572 [2024-07-15 16:20:18.273683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.572 [2024-07-15 16:20:18.273699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:42.572 [2024-07-15 16:20:18.282391] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e95a0 00:28:42.572 [2024-07-15 16:20:18.283151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:21510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.572 [2024-07-15 16:20:18.283166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:42.572 [2024-07-15 16:20:18.293293] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190ef270 00:28:42.572 [2024-07-15 16:20:18.294026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.572 [2024-07-15 16:20:18.294042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:42.572 [2024-07-15 16:20:18.307318] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e73e0 00:28:42.572 [2024-07-15 16:20:18.308799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.572 [2024-07-15 16:20:18.308817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:42.572 [2024-07-15 16:20:18.317593] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190ecc78 00:28:42.572 [2024-07-15 16:20:18.318417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.572 [2024-07-15 16:20:18.318433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:42.572 [2024-07-15 16:20:18.329363] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190ecc78 00:28:42.572 [2024-07-15 16:20:18.330072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:20182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.572 [2024-07-15 16:20:18.330088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:42.572 [2024-07-15 16:20:18.341109] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190ecc78 00:28:42.572 [2024-07-15 16:20:18.341827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:9672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.572 [2024-07-15 16:20:18.341843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:42.572 [2024-07-15 16:20:18.354305] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190ecc78 00:28:42.572 [2024-07-15 16:20:18.355769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:10367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.572 [2024-07-15 16:20:18.355784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.572 [2024-07-15 16:20:18.366843] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190ecc78 00:28:42.572 [2024-07-15 16:20:18.368207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.572 [2024-07-15 16:20:18.368223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.572 [2024-07-15 16:20:18.378625] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190ecc78 00:28:42.572 [2024-07-15 16:20:18.379972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:18350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.572 [2024-07-15 16:20:18.379988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.572 [2024-07-15 16:20:18.390381] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190ecc78 00:28:42.572 [2024-07-15 16:20:18.391747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:24685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.572 [2024-07-15 16:20:18.391763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.572 [2024-07-15 16:20:18.401309] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e8088 00:28:42.572 [2024-07-15 16:20:18.402734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.572 [2024-07-15 16:20:18.402750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:42.834 [2024-07-15 16:20:18.413803] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e8088 00:28:42.834 [2024-07-15 16:20:18.415137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.834 [2024-07-15 16:20:18.415152] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:42.834 [2024-07-15 16:20:18.425571] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e8088 00:28:42.834 [2024-07-15 16:20:18.426908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:6774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.834 [2024-07-15 16:20:18.426924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:42.834 [2024-07-15 16:20:18.437335] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e8088 00:28:42.834 [2024-07-15 16:20:18.438778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.834 [2024-07-15 16:20:18.438793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:42.834 [2024-07-15 16:20:18.449141] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e8088 00:28:42.834 [2024-07-15 16:20:18.450471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.834 [2024-07-15 16:20:18.450487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:42.834 [2024-07-15 16:20:18.460897] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e8088 00:28:42.834 [2024-07-15 16:20:18.462287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:16441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.834 [2024-07-15 16:20:18.462302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:42.834 [2024-07-15 16:20:18.472670] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e8088 00:28:42.834 [2024-07-15 16:20:18.473994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.834 [2024-07-15 16:20:18.474009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:42.834 [2024-07-15 16:20:18.484412] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e8088 00:28:42.834 [2024-07-15 16:20:18.485740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.834 [2024-07-15 16:20:18.485755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:42.834 [2024-07-15 16:20:18.496199] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e8088 00:28:42.834 [2024-07-15 16:20:18.497640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:21411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.834 [2024-07-15 
16:20:18.497655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:42.834 [2024-07-15 16:20:18.507956] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e8088 00:28:42.834 [2024-07-15 16:20:18.509399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.834 [2024-07-15 16:20:18.509414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:42.834 [2024-07-15 16:20:18.519716] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e8088 00:28:42.834 [2024-07-15 16:20:18.521153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.834 [2024-07-15 16:20:18.521168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:42.834 [2024-07-15 16:20:18.531455] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e8088 00:28:42.834 [2024-07-15 16:20:18.532889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.834 [2024-07-15 16:20:18.532904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:42.834 [2024-07-15 16:20:18.543173] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e8088 00:28:42.834 [2024-07-15 16:20:18.544508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.834 [2024-07-15 16:20:18.544524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:42.834 [2024-07-15 16:20:18.554929] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e8088 00:28:42.834 [2024-07-15 16:20:18.556386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:15451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.834 [2024-07-15 16:20:18.556401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:42.834 [2024-07-15 16:20:18.566706] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e8088 00:28:42.834 [2024-07-15 16:20:18.568151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:14534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.834 [2024-07-15 16:20:18.568166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:42.834 [2024-07-15 16:20:18.578496] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e8088 00:28:42.834 [2024-07-15 16:20:18.579933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:42.834 [2024-07-15 16:20:18.579948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:42.834 [2024-07-15 16:20:18.590254] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e8088 00:28:42.834 [2024-07-15 16:20:18.591694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.834 [2024-07-15 16:20:18.591709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:42.834 [2024-07-15 16:20:18.601979] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e8088 00:28:42.834 [2024-07-15 16:20:18.603416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:18101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.834 [2024-07-15 16:20:18.603431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:42.834 [2024-07-15 16:20:18.613721] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e8088 00:28:42.834 [2024-07-15 16:20:18.615119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.834 [2024-07-15 16:20:18.615138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:42.834 [2024-07-15 16:20:18.625492] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e8088 00:28:42.834 [2024-07-15 16:20:18.626926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.834 [2024-07-15 16:20:18.626942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:42.834 [2024-07-15 16:20:18.637229] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e8088 00:28:42.834 [2024-07-15 16:20:18.638564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:5503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.834 [2024-07-15 16:20:18.638579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:42.834 [2024-07-15 16:20:18.648930] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190f5378 00:28:42.834 [2024-07-15 16:20:18.650384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.834 [2024-07-15 16:20:18.650399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:42.834 [2024-07-15 16:20:18.659884] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e6b70 00:28:42.834 [2024-07-15 16:20:18.661254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:24624 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:42.834 [2024-07-15 16:20:18.661268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:42.834 [2024-07-15 16:20:18.672423] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e7818 00:28:42.834 [2024-07-15 16:20:18.673856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.834 [2024-07-15 16:20:18.673872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:43.096 [2024-07-15 16:20:18.683442] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e8d30 00:28:43.096 [2024-07-15 16:20:18.684869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:11477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.096 [2024-07-15 16:20:18.684884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:43.096 [2024-07-15 16:20:18.695937] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e8d30 00:28:43.096 [2024-07-15 16:20:18.697383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.096 [2024-07-15 16:20:18.697398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:43.096 [2024-07-15 16:20:18.707669] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e8d30 00:28:43.096 [2024-07-15 16:20:18.709090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.096 [2024-07-15 16:20:18.709105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:43.096 [2024-07-15 16:20:18.719397] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e8d30 00:28:43.096 [2024-07-15 16:20:18.720817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.096 [2024-07-15 16:20:18.720835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:43.096 [2024-07-15 16:20:18.731114] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e8d30 00:28:43.096 [2024-07-15 16:20:18.732531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.096 [2024-07-15 16:20:18.732546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:43.096 [2024-07-15 16:20:18.742858] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e8d30 00:28:43.096 [2024-07-15 16:20:18.744247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:3268 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.096 [2024-07-15 16:20:18.744262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:43.096 [2024-07-15 16:20:18.754626] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e8d30 00:28:43.096 [2024-07-15 16:20:18.756042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:10877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.096 [2024-07-15 16:20:18.756056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:43.096 [2024-07-15 16:20:18.766370] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e8d30 00:28:43.096 [2024-07-15 16:20:18.767789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.096 [2024-07-15 16:20:18.767805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:43.096 [2024-07-15 16:20:18.778228] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e8d30 00:28:43.096 [2024-07-15 16:20:18.779644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.096 [2024-07-15 16:20:18.779660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:43.096 [2024-07-15 16:20:18.789992] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e8d30 00:28:43.096 [2024-07-15 16:20:18.791422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.096 [2024-07-15 16:20:18.791437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:43.096 [2024-07-15 16:20:18.801721] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e8d30 00:28:43.096 [2024-07-15 16:20:18.803163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.096 [2024-07-15 16:20:18.803178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:43.096 [2024-07-15 16:20:18.813474] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e8d30 00:28:43.096 [2024-07-15 16:20:18.814893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.096 [2024-07-15 16:20:18.814909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:43.096 [2024-07-15 16:20:18.825237] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e8d30 00:28:43.096 [2024-07-15 16:20:18.826656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:105 nsid:1 lba:1715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.096 [2024-07-15 16:20:18.826672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:43.096 [2024-07-15 16:20:18.836977] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e8d30 00:28:43.096 [2024-07-15 16:20:18.838394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:4932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.096 [2024-07-15 16:20:18.838408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:43.096 [2024-07-15 16:20:18.848714] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e8d30 00:28:43.096 [2024-07-15 16:20:18.850132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:7208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.096 [2024-07-15 16:20:18.850146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:43.096 [2024-07-15 16:20:18.860436] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e8d30 00:28:43.096 [2024-07-15 16:20:18.861851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.096 [2024-07-15 16:20:18.861866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:43.096 [2024-07-15 16:20:18.872193] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e8d30 00:28:43.096 [2024-07-15 16:20:18.873609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:8432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.096 [2024-07-15 16:20:18.873625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:43.096 [2024-07-15 16:20:18.883936] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e8d30 00:28:43.096 [2024-07-15 16:20:18.885373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:14828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.096 [2024-07-15 16:20:18.885388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:43.096 [2024-07-15 16:20:18.894888] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190f0350 00:28:43.096 [2024-07-15 16:20:18.896269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.097 [2024-07-15 16:20:18.896284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:43.097 [2024-07-15 16:20:18.908936] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e7818 00:28:43.097 [2024-07-15 16:20:18.911001] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:25423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.097 [2024-07-15 16:20:18.911016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:43.097 [2024-07-15 16:20:18.919108] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190ee5c8 00:28:43.097 [2024-07-15 16:20:18.920519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.097 [2024-07-15 16:20:18.920535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:43.097 [2024-07-15 16:20:18.930854] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e8d30 00:28:43.097 [2024-07-15 16:20:18.932273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:13764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.097 [2024-07-15 16:20:18.932288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:43.358 [2024-07-15 16:20:18.944185] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e9e10 00:28:43.358 [2024-07-15 16:20:18.946239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:4271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.358 [2024-07-15 16:20:18.946254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:43.358 [2024-07-15 16:20:18.954408] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190f6458 00:28:43.358 [2024-07-15 16:20:18.955828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.358 [2024-07-15 16:20:18.955844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:43.358 [2024-07-15 16:20:18.966163] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190f6458 00:28:43.358 [2024-07-15 16:20:18.967583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:14604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.358 [2024-07-15 16:20:18.967599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:43.358 [2024-07-15 16:20:18.977902] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190f6458 00:28:43.358 [2024-07-15 16:20:18.979247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:1464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.358 [2024-07-15 16:20:18.979262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:43.358 [2024-07-15 16:20:18.989651] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190f6458 00:28:43.358 [2024-07-15 16:20:18.991046] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.358 [2024-07-15 16:20:18.991060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:43.358 [2024-07-15 16:20:19.001396] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190f6458 00:28:43.358 [2024-07-15 16:20:19.002791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:17541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.358 [2024-07-15 16:20:19.002806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:43.358 [2024-07-15 16:20:19.013142] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190f6458 00:28:43.358 [2024-07-15 16:20:19.014503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:9912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.358 [2024-07-15 16:20:19.014518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:43.358 [2024-07-15 16:20:19.024860] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190f6458 00:28:43.358 [2024-07-15 16:20:19.026224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:19170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.358 [2024-07-15 16:20:19.026245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:43.358 [2024-07-15 16:20:19.036601] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190f6458 00:28:43.358 [2024-07-15 16:20:19.037992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.358 [2024-07-15 16:20:19.038007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:43.358 [2024-07-15 16:20:19.048310] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190f6458 00:28:43.358 [2024-07-15 16:20:19.049706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.358 [2024-07-15 16:20:19.049722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:43.358 [2024-07-15 16:20:19.060045] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190f6458 00:28:43.358 [2024-07-15 16:20:19.061446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:14805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.358 [2024-07-15 16:20:19.061460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:43.358 [2024-07-15 16:20:19.072010] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190f6458 00:28:43.358 [2024-07-15 
16:20:19.073378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.358 [2024-07-15 16:20:19.073393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:43.358 [2024-07-15 16:20:19.083155] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e0ea0 00:28:43.358 [2024-07-15 16:20:19.084545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:9061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.358 [2024-07-15 16:20:19.084561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:43.358 [2024-07-15 16:20:19.093005] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190f0350 00:28:43.358 [2024-07-15 16:20:19.093904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.358 [2024-07-15 16:20:19.093918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:43.358 [2024-07-15 16:20:19.105478] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e1f80 00:28:43.358 [2024-07-15 16:20:19.106393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:15871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.358 [2024-07-15 16:20:19.106408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:43.358 [2024-07-15 16:20:19.117227] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e1f80 00:28:43.358 [2024-07-15 16:20:19.118109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:8251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.358 [2024-07-15 16:20:19.118127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:43.358 [2024-07-15 16:20:19.129025] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e1f80 00:28:43.358 [2024-07-15 16:20:19.129940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.358 [2024-07-15 16:20:19.129956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:43.359 [2024-07-15 16:20:19.140796] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e1f80 00:28:43.359 [2024-07-15 16:20:19.141683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.359 [2024-07-15 16:20:19.141699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:43.359 [2024-07-15 16:20:19.152575] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e1f80 
00:28:43.359 [2024-07-15 16:20:19.153462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:18824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.359 [2024-07-15 16:20:19.153478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:43.359 [2024-07-15 16:20:19.164341] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e1f80 00:28:43.359 [2024-07-15 16:20:19.165206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:15247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.359 [2024-07-15 16:20:19.165221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:43.359 [2024-07-15 16:20:19.177593] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190f0350 00:28:43.359 [2024-07-15 16:20:19.179148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.359 [2024-07-15 16:20:19.179162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:43.359 [2024-07-15 16:20:19.187844] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e4de8 00:28:43.359 [2024-07-15 16:20:19.188716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:13806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.359 [2024-07-15 16:20:19.188731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:43.620 [2024-07-15 16:20:19.199622] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e4de8 00:28:43.620 [2024-07-15 16:20:19.200498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.620 [2024-07-15 16:20:19.200513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:43.620 [2024-07-15 16:20:19.211402] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e4de8 00:28:43.620 [2024-07-15 16:20:19.212167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:23145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.620 [2024-07-15 16:20:19.212182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:43.620 [2024-07-15 16:20:19.224621] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e4de8 00:28:43.620 [2024-07-15 16:20:19.226144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.620 [2024-07-15 16:20:19.226159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:43.620 [2024-07-15 16:20:19.234828] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) 
with pdu=0x2000190ee5c8 00:28:43.620 [2024-07-15 16:20:19.235593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:23135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.620 [2024-07-15 16:20:19.235609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:43.620 [2024-07-15 16:20:19.245728] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e7818 00:28:43.620 [2024-07-15 16:20:19.246571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:17761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.620 [2024-07-15 16:20:19.246585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:43.620 [2024-07-15 16:20:19.258232] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e7818 00:28:43.620 [2024-07-15 16:20:19.258971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:6867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.620 [2024-07-15 16:20:19.258986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:43.620 [2024-07-15 16:20:19.269999] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e7818 00:28:43.620 [2024-07-15 16:20:19.270841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.620 [2024-07-15 16:20:19.270856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:43.620 [2024-07-15 16:20:19.281760] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e7818 00:28:43.620 [2024-07-15 16:20:19.282605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:13528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.620 [2024-07-15 16:20:19.282619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:43.620 [2024-07-15 16:20:19.293527] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e7818 00:28:43.620 [2024-07-15 16:20:19.294368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:8641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.620 [2024-07-15 16:20:19.294383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:43.620 [2024-07-15 16:20:19.304472] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e3498 00:28:43.620 [2024-07-15 16:20:19.305205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:18697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.620 [2024-07-15 16:20:19.305221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:43.620 [2024-07-15 16:20:19.318544] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1272aa0) with pdu=0x2000190e23b8 [2024-07-15 16:20:19.320035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:43.620 [2024-07-15 16:20:19.320050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:43.620 [2024-07-15 16:20:19.329196] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190f1868
00:28:43.620 [2024-07-15 16:20:19.330187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:43.620 [2024-07-15 16:20:19.330206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:28:43.620 [2024-07-15 16:20:19.340366] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190df118
00:28:43.620 [2024-07-15 16:20:19.341261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:43.620 [2024-07-15 16:20:19.341275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:28:43.620 [2024-07-15 16:20:19.354393] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190e38d0
00:28:43.620 [2024-07-15 16:20:19.355933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:14710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:43.620 [2024-07-15 16:20:19.355948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:43.620 [2024-07-15 16:20:19.363874] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1272aa0) with pdu=0x2000190fe720
00:28:43.620 [2024-07-15 16:20:19.364762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:43.620 [2024-07-15 16:20:19.364778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:28:43.620
00:28:43.620 Latency(us)
00:28:43.620 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:43.620 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:43.620 nvme0n1 : 2.01 21620.94 84.46 0.00 0.00 5915.37 2239.15 15619.41
00:28:43.620 ===================================================================================================================
00:28:43.620 Total : 21620.94 84.46 0.00 0.00 5915.37 2239.15 15619.41
00:28:43.620 0
00:28:43.620 16:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:43.620 16:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:43.620 16:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:43.620 | .driver_specific
00:28:43.620 | .nvme_error
00:28:43.620 | .status_code
00:28:43.620 | .command_transient_transport_error'
00:28:43.620 16:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:43.881 16:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 170 > 0 ))
00:28:43.881 16:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2463352
00:28:43.881 16:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2463352 ']'
00:28:43.881 16:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2463352
00:28:43.881 16:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:28:43.881 16:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:28:43.881 16:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2463352
00:28:43.881 16:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:28:43.881 16:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:28:43.881 16:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2463352'
00:28:43.881 killing process with pid 2463352
00:28:43.881 16:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2463352
00:28:43.881 Received shutdown signal, test time was about 2.000000 seconds
00:28:43.881
00:28:43.881 Latency(us)
00:28:43.881 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:43.881 ===================================================================================================================
00:28:43.881 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:43.881 16:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2463352
00:28:43.881 16:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:28:43.881 16:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:44.141 16:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:28:44.141 16:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:44.141 16:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:44.141 16:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2464103
00:28:44.141 16:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2464103 /var/tmp/bperf.sock
00:28:44.141 16:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2464103 ']'
00:28:44.141 16:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:28:44.141 16:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:44.141 16:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:28:44.141 16:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
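The xtrace above is the pass/fail check digest.sh runs at the end of the pass that just completed: get_transient_errcount queries the bdevperf app over the /var/tmp/bperf.sock RPC socket for nvme0n1's iostat and pulls the transient transport error counter out of the NVMe error statistics, and the check at host/digest.sh@71 succeeds only when that count is greater than zero (170 in this run). A minimal stand-alone sketch of the same check, reconstructed from the commands logged above; the errcount variable name is illustrative:

  # Read the per-bdev NVMe error statistics (collected because error statistics
  # were enabled via bdev_nvme_set_options --nvme-error-stat for this bdevperf run).
  errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
                 -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
             | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # Every injected data digest error is expected to surface as a
  # COMMAND TRANSIENT TRANSPORT ERROR (00/22), so the count must be positive.
  (( errcount > 0 ))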
00:28:44.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:44.141 16:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:28:44.141 16:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:44.141 [2024-07-15 16:20:19.769322] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization...
00:28:44.141 [2024-07-15 16:20:19.769378] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2464103 ]
00:28:44.142 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:44.142 Zero copy mechanism will not be used.
00:28:44.142 EAL: No free 2048 kB hugepages reported on node 1
00:28:44.142 [2024-07-15 16:20:19.841795] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:44.142 [2024-07-15 16:20:19.894919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:28:44.713 16:20:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:28:44.713 16:20:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:28:44.713 16:20:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:44.713 16:20:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:44.974 16:20:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:44.974 16:20:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:44.974 16:20:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:44.974 16:20:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:44.974 16:20:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:44.974 16:20:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:45.234 nvme0n1
00:28:45.234 16:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:28:45.234 16:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:45.234 16:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:45.234 16:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:45.234 16:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:45.234 16:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:45.494 I/O
size of 131072 is greater than zero copy threshold (65536). 00:28:45.494 Zero copy mechanism will not be used. 00:28:45.494 Running I/O for 2 seconds... 00:28:45.494 [2024-07-15 16:20:21.160404] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:45.494 [2024-07-15 16:20:21.160724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.494 [2024-07-15 16:20:21.160751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.494 [2024-07-15 16:20:21.174697] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:45.494 [2024-07-15 16:20:21.174951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.494 [2024-07-15 16:20:21.174969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.494 [2024-07-15 16:20:21.186463] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:45.494 [2024-07-15 16:20:21.186826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.494 [2024-07-15 16:20:21.186844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.494 [2024-07-15 16:20:21.195633] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:45.494 [2024-07-15 16:20:21.195965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.494 [2024-07-15 16:20:21.195982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.494 [2024-07-15 16:20:21.205061] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:45.494 [2024-07-15 16:20:21.205402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.494 [2024-07-15 16:20:21.205419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.494 [2024-07-15 16:20:21.214745] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:45.494 [2024-07-15 16:20:21.215068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.494 [2024-07-15 16:20:21.215085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.494 [2024-07-15 16:20:21.225025] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:45.494 [2024-07-15 16:20:21.225262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:45.494 [2024-07-15 16:20:21.225279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.494 [2024-07-15 16:20:21.236336] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:45.494 [2024-07-15 16:20:21.236672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.494 [2024-07-15 16:20:21.236689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.494 [2024-07-15 16:20:21.246984] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:45.494 [2024-07-15 16:20:21.247313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.494 [2024-07-15 16:20:21.247330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.494 [2024-07-15 16:20:21.257131] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:45.494 [2024-07-15 16:20:21.257494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.494 [2024-07-15 16:20:21.257510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.494 [2024-07-15 16:20:21.267708] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:45.494 [2024-07-15 16:20:21.267940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.494 [2024-07-15 16:20:21.267955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.494 [2024-07-15 16:20:21.277110] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:45.494 [2024-07-15 16:20:21.277247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.494 [2024-07-15 16:20:21.277262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.494 [2024-07-15 16:20:21.285993] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:45.495 [2024-07-15 16:20:21.286341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.495 [2024-07-15 16:20:21.286357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.495 [2024-07-15 16:20:21.295252] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:45.495 [2024-07-15 16:20:21.295579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.495 [2024-07-15 16:20:21.295595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.495 [2024-07-15 16:20:21.304933] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:45.495 [2024-07-15 16:20:21.305169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.495 [2024-07-15 16:20:21.305188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.495 [2024-07-15 16:20:21.314845] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:45.495 [2024-07-15 16:20:21.315211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.495 [2024-07-15 16:20:21.315228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.495 [2024-07-15 16:20:21.324320] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:45.495 [2024-07-15 16:20:21.324686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.495 [2024-07-15 16:20:21.324702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.495 [2024-07-15 16:20:21.334878] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:45.495 [2024-07-15 16:20:21.335229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.495 [2024-07-15 16:20:21.335245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.757 [2024-07-15 16:20:21.343713] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:45.757 [2024-07-15 16:20:21.344038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.757 [2024-07-15 16:20:21.344055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.757 [2024-07-15 16:20:21.353039] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:45.757 [2024-07-15 16:20:21.353396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.757 [2024-07-15 16:20:21.353412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.757 [2024-07-15 16:20:21.362572] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:45.757 [2024-07-15 16:20:21.362897] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.757 [2024-07-15 16:20:21.362914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.757 [2024-07-15 16:20:21.371796] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:45.757 [2024-07-15 16:20:21.371987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.757 [2024-07-15 16:20:21.372002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.757 [2024-07-15 16:20:21.382004] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:45.757 [2024-07-15 16:20:21.382325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.757 [2024-07-15 16:20:21.382343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.757 [2024-07-15 16:20:21.393078] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:45.757 [2024-07-15 16:20:21.393524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.757 [2024-07-15 16:20:21.393542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.757 [2024-07-15 16:20:21.403237] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:45.757 [2024-07-15 16:20:21.403490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.757 [2024-07-15 16:20:21.403506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.757 [2024-07-15 16:20:21.413970] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:45.757 [2024-07-15 16:20:21.414359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.757 [2024-07-15 16:20:21.414375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.757 [2024-07-15 16:20:21.423402] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:45.757 [2024-07-15 16:20:21.423741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.757 [2024-07-15 16:20:21.423758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.757 [2024-07-15 16:20:21.433226] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:45.757 [2024-07-15 16:20:21.433552] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.757 [2024-07-15 16:20:21.433568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.757 [2024-07-15 16:20:21.444030] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:45.757 [2024-07-15 16:20:21.444391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.757 [2024-07-15 16:20:21.444408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.757 [2024-07-15 16:20:21.454461] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:45.757 [2024-07-15 16:20:21.454723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.757 [2024-07-15 16:20:21.454739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.757 [2024-07-15 16:20:21.464470] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:45.757 [2024-07-15 16:20:21.464725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.757 [2024-07-15 16:20:21.464741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.757 [2024-07-15 16:20:21.474293] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:45.757 [2024-07-15 16:20:21.474512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.757 [2024-07-15 16:20:21.474527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.757 [2024-07-15 16:20:21.484204] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:45.757 [2024-07-15 16:20:21.484579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.757 [2024-07-15 16:20:21.484596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.757 [2024-07-15 16:20:21.494675] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:45.757 [2024-07-15 16:20:21.494893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.757 [2024-07-15 16:20:21.494909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.757 [2024-07-15 16:20:21.505057] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 
00:28:45.757 [2024-07-15 16:20:21.505525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.757 [2024-07-15 16:20:21.505541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.757 [2024-07-15 16:20:21.515515] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:45.757 [2024-07-15 16:20:21.515847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.757 [2024-07-15 16:20:21.515864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.757 [2024-07-15 16:20:21.526156] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:45.757 [2024-07-15 16:20:21.526403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.757 [2024-07-15 16:20:21.526419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.757 [2024-07-15 16:20:21.536314] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:45.757 [2024-07-15 16:20:21.536671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.757 [2024-07-15 16:20:21.536687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.758 [2024-07-15 16:20:21.546784] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:45.758 [2024-07-15 16:20:21.547167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.758 [2024-07-15 16:20:21.547183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.758 [2024-07-15 16:20:21.556652] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:45.758 [2024-07-15 16:20:21.556979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.758 [2024-07-15 16:20:21.556996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.758 [2024-07-15 16:20:21.566210] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:45.758 [2024-07-15 16:20:21.566698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.758 [2024-07-15 16:20:21.566717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.758 [2024-07-15 16:20:21.576967] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:45.758 [2024-07-15 16:20:21.577286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.758 [2024-07-15 16:20:21.577303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.758 [2024-07-15 16:20:21.587180] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:45.758 [2024-07-15 16:20:21.587518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.758 [2024-07-15 16:20:21.587534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.758 [2024-07-15 16:20:21.597092] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:45.758 [2024-07-15 16:20:21.597484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.758 [2024-07-15 16:20:21.597500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.020 [2024-07-15 16:20:21.607325] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.020 [2024-07-15 16:20:21.607535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.020 [2024-07-15 16:20:21.607551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.020 [2024-07-15 16:20:21.617029] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.020 [2024-07-15 16:20:21.617472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.020 [2024-07-15 16:20:21.617488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.020 [2024-07-15 16:20:21.626723] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.020 [2024-07-15 16:20:21.627109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.020 [2024-07-15 16:20:21.627130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.020 [2024-07-15 16:20:21.636953] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.020 [2024-07-15 16:20:21.637212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.020 [2024-07-15 16:20:21.637228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.020 [2024-07-15 16:20:21.647669] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.020 [2024-07-15 16:20:21.648049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.020 [2024-07-15 16:20:21.648065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.020 [2024-07-15 16:20:21.657490] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.020 [2024-07-15 16:20:21.657754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.021 [2024-07-15 16:20:21.657770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.021 [2024-07-15 16:20:21.666773] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.021 [2024-07-15 16:20:21.667149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.021 [2024-07-15 16:20:21.667165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.021 [2024-07-15 16:20:21.677039] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.021 [2024-07-15 16:20:21.677404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.021 [2024-07-15 16:20:21.677421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.021 [2024-07-15 16:20:21.687139] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.021 [2024-07-15 16:20:21.687467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.021 [2024-07-15 16:20:21.687484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.021 [2024-07-15 16:20:21.696658] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.021 [2024-07-15 16:20:21.696899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.021 [2024-07-15 16:20:21.696914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.021 [2024-07-15 16:20:21.706764] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.021 [2024-07-15 16:20:21.707173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.021 [2024-07-15 16:20:21.707190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:28:46.021 [2024-07-15 16:20:21.717415] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.021 [2024-07-15 16:20:21.717633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.021 [2024-07-15 16:20:21.717649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.021 [2024-07-15 16:20:21.727644] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.021 [2024-07-15 16:20:21.728030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.021 [2024-07-15 16:20:21.728046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.021 [2024-07-15 16:20:21.738667] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.021 [2024-07-15 16:20:21.738889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.021 [2024-07-15 16:20:21.738905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.021 [2024-07-15 16:20:21.749410] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.021 [2024-07-15 16:20:21.749670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.021 [2024-07-15 16:20:21.749686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.021 [2024-07-15 16:20:21.760996] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.021 [2024-07-15 16:20:21.761288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.021 [2024-07-15 16:20:21.761311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.021 [2024-07-15 16:20:21.772969] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.021 [2024-07-15 16:20:21.773226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.021 [2024-07-15 16:20:21.773242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.021 [2024-07-15 16:20:21.784402] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.021 [2024-07-15 16:20:21.784887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.021 [2024-07-15 16:20:21.784905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.021 [2024-07-15 16:20:21.796423] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.021 [2024-07-15 16:20:21.796827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.021 [2024-07-15 16:20:21.796843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.021 [2024-07-15 16:20:21.807874] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.021 [2024-07-15 16:20:21.808144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.021 [2024-07-15 16:20:21.808161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.021 [2024-07-15 16:20:21.819537] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.021 [2024-07-15 16:20:21.819947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.021 [2024-07-15 16:20:21.819963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.021 [2024-07-15 16:20:21.829162] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.021 [2024-07-15 16:20:21.829411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.021 [2024-07-15 16:20:21.829427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.021 [2024-07-15 16:20:21.840790] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.021 [2024-07-15 16:20:21.841117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.021 [2024-07-15 16:20:21.841143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.021 [2024-07-15 16:20:21.853262] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.021 [2024-07-15 16:20:21.853647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.021 [2024-07-15 16:20:21.853663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.282 [2024-07-15 16:20:21.863594] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.282 [2024-07-15 16:20:21.863901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.282 [2024-07-15 16:20:21.863918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.282 [2024-07-15 16:20:21.873801] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.282 [2024-07-15 16:20:21.874060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.282 [2024-07-15 16:20:21.874082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.282 [2024-07-15 16:20:21.883268] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.282 [2024-07-15 16:20:21.883730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.282 [2024-07-15 16:20:21.883747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.282 [2024-07-15 16:20:21.893747] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.282 [2024-07-15 16:20:21.894017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.282 [2024-07-15 16:20:21.894032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.283 [2024-07-15 16:20:21.904199] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.283 [2024-07-15 16:20:21.904547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.283 [2024-07-15 16:20:21.904563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.283 [2024-07-15 16:20:21.914946] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.283 [2024-07-15 16:20:21.915368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.283 [2024-07-15 16:20:21.915384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.283 [2024-07-15 16:20:21.926367] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.283 [2024-07-15 16:20:21.926641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.283 [2024-07-15 16:20:21.926657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.283 [2024-07-15 16:20:21.937328] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.283 [2024-07-15 16:20:21.937710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.283 [2024-07-15 16:20:21.937727] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.283 [2024-07-15 16:20:21.948377] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.283 [2024-07-15 16:20:21.948892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.283 [2024-07-15 16:20:21.948908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.283 [2024-07-15 16:20:21.959759] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.283 [2024-07-15 16:20:21.960172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.283 [2024-07-15 16:20:21.960188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.283 [2024-07-15 16:20:21.970231] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.283 [2024-07-15 16:20:21.970736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.283 [2024-07-15 16:20:21.970752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.283 [2024-07-15 16:20:21.980822] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.283 [2024-07-15 16:20:21.981235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.283 [2024-07-15 16:20:21.981252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.283 [2024-07-15 16:20:21.991000] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.283 [2024-07-15 16:20:21.991227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.283 [2024-07-15 16:20:21.991242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.283 [2024-07-15 16:20:22.001030] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.283 [2024-07-15 16:20:22.001394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.283 [2024-07-15 16:20:22.001410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.283 [2024-07-15 16:20:22.011039] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.283 [2024-07-15 16:20:22.011267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.283 
[2024-07-15 16:20:22.011283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.283 [2024-07-15 16:20:22.020992] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.283 [2024-07-15 16:20:22.021373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.283 [2024-07-15 16:20:22.021393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.283 [2024-07-15 16:20:22.030842] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.283 [2024-07-15 16:20:22.031231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.283 [2024-07-15 16:20:22.031246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.283 [2024-07-15 16:20:22.040988] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.283 [2024-07-15 16:20:22.041416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.283 [2024-07-15 16:20:22.041432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.283 [2024-07-15 16:20:22.051460] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.283 [2024-07-15 16:20:22.051704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.283 [2024-07-15 16:20:22.051720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.283 [2024-07-15 16:20:22.061502] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.283 [2024-07-15 16:20:22.061923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.283 [2024-07-15 16:20:22.061939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.283 [2024-07-15 16:20:22.071643] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.283 [2024-07-15 16:20:22.071979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.283 [2024-07-15 16:20:22.071995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.283 [2024-07-15 16:20:22.081943] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.283 [2024-07-15 16:20:22.082406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.283 [2024-07-15 16:20:22.082422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.283 [2024-07-15 16:20:22.092917] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.283 [2024-07-15 16:20:22.093298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.283 [2024-07-15 16:20:22.093314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.283 [2024-07-15 16:20:22.104004] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.283 [2024-07-15 16:20:22.104383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.283 [2024-07-15 16:20:22.104399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.283 [2024-07-15 16:20:22.114258] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.283 [2024-07-15 16:20:22.114639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.283 [2024-07-15 16:20:22.114655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.544 [2024-07-15 16:20:22.124253] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.544 [2024-07-15 16:20:22.124486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.544 [2024-07-15 16:20:22.124501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.544 [2024-07-15 16:20:22.135104] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.544 [2024-07-15 16:20:22.135495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.544 [2024-07-15 16:20:22.135511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.544 [2024-07-15 16:20:22.145784] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.544 [2024-07-15 16:20:22.146132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.544 [2024-07-15 16:20:22.146148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.544 [2024-07-15 16:20:22.156531] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.544 [2024-07-15 16:20:22.156872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.544 [2024-07-15 16:20:22.156888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.544 [2024-07-15 16:20:22.166950] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.544 [2024-07-15 16:20:22.167261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.544 [2024-07-15 16:20:22.167276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.544 [2024-07-15 16:20:22.176886] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.544 [2024-07-15 16:20:22.177262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.544 [2024-07-15 16:20:22.177279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.544 [2024-07-15 16:20:22.186679] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.544 [2024-07-15 16:20:22.186900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.544 [2024-07-15 16:20:22.186916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.544 [2024-07-15 16:20:22.196339] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.544 [2024-07-15 16:20:22.196712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.544 [2024-07-15 16:20:22.196728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.544 [2024-07-15 16:20:22.206498] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.544 [2024-07-15 16:20:22.206855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.544 [2024-07-15 16:20:22.206871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.544 [2024-07-15 16:20:22.217153] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.544 [2024-07-15 16:20:22.217372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.544 [2024-07-15 16:20:22.217387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.544 [2024-07-15 16:20:22.227652] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.544 [2024-07-15 16:20:22.228092] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.544 [2024-07-15 16:20:22.228107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.544 [2024-07-15 16:20:22.238231] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.544 [2024-07-15 16:20:22.238656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.544 [2024-07-15 16:20:22.238672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.544 [2024-07-15 16:20:22.249389] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.544 [2024-07-15 16:20:22.249791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.544 [2024-07-15 16:20:22.249807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.544 [2024-07-15 16:20:22.259917] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.544 [2024-07-15 16:20:22.260195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.544 [2024-07-15 16:20:22.260210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.544 [2024-07-15 16:20:22.270733] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.544 [2024-07-15 16:20:22.271109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.544 [2024-07-15 16:20:22.271129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.544 [2024-07-15 16:20:22.280582] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.544 [2024-07-15 16:20:22.280872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.544 [2024-07-15 16:20:22.280888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.544 [2024-07-15 16:20:22.291452] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.544 [2024-07-15 16:20:22.291962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.544 [2024-07-15 16:20:22.291981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.544 [2024-07-15 16:20:22.302192] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.544 
[2024-07-15 16:20:22.302532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.544 [2024-07-15 16:20:22.302548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.544 [2024-07-15 16:20:22.313054] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.544 [2024-07-15 16:20:22.313287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.544 [2024-07-15 16:20:22.313302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.544 [2024-07-15 16:20:22.323726] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.545 [2024-07-15 16:20:22.323995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.545 [2024-07-15 16:20:22.324012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.545 [2024-07-15 16:20:22.334723] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.545 [2024-07-15 16:20:22.335121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.545 [2024-07-15 16:20:22.335141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.545 [2024-07-15 16:20:22.345214] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.545 [2024-07-15 16:20:22.345614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.545 [2024-07-15 16:20:22.345630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.545 [2024-07-15 16:20:22.355648] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.545 [2024-07-15 16:20:22.355857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.545 [2024-07-15 16:20:22.355873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.545 [2024-07-15 16:20:22.366583] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.545 [2024-07-15 16:20:22.366972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.545 [2024-07-15 16:20:22.366988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.545 [2024-07-15 16:20:22.377092] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.545 [2024-07-15 16:20:22.377253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.545 [2024-07-15 16:20:22.377268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.805 [2024-07-15 16:20:22.387728] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.805 [2024-07-15 16:20:22.387976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.805 [2024-07-15 16:20:22.387991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.805 [2024-07-15 16:20:22.397650] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.805 [2024-07-15 16:20:22.397988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.805 [2024-07-15 16:20:22.398004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.805 [2024-07-15 16:20:22.407761] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.805 [2024-07-15 16:20:22.408143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.805 [2024-07-15 16:20:22.408160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.805 [2024-07-15 16:20:22.418234] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.805 [2024-07-15 16:20:22.418527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.805 [2024-07-15 16:20:22.418544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.805 [2024-07-15 16:20:22.428075] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.805 [2024-07-15 16:20:22.428495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.805 [2024-07-15 16:20:22.428511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.805 [2024-07-15 16:20:22.437964] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.805 [2024-07-15 16:20:22.438337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.805 [2024-07-15 16:20:22.438353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.805 [2024-07-15 16:20:22.447456] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.805 [2024-07-15 16:20:22.447700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.805 [2024-07-15 16:20:22.447715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.805 [2024-07-15 16:20:22.457755] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.805 [2024-07-15 16:20:22.458083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.805 [2024-07-15 16:20:22.458099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.805 [2024-07-15 16:20:22.468616] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.805 [2024-07-15 16:20:22.468963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.805 [2024-07-15 16:20:22.468978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.805 [2024-07-15 16:20:22.478227] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.805 [2024-07-15 16:20:22.478654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.805 [2024-07-15 16:20:22.478670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.805 [2024-07-15 16:20:22.488715] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.805 [2024-07-15 16:20:22.488978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.805 [2024-07-15 16:20:22.488994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.805 [2024-07-15 16:20:22.498360] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.805 [2024-07-15 16:20:22.498568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.805 [2024-07-15 16:20:22.498583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.805 [2024-07-15 16:20:22.506375] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.805 [2024-07-15 16:20:22.506834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.805 [2024-07-15 16:20:22.506850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
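Note on the repeated errors above: each "tcp.c:2067:data_crc32_calc_done: Data digest error" line records a WRITE whose NVMe/TCP data digest check failed on receive; the data digest (DDGST) is a CRC-32C computed over the PDU payload, and every mismatch is surfaced by nvme_qpair.c as a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, which is why the two messages always appear in pairs. The snippet below is only an illustrative sketch of that digest comparison, not SPDK's implementation; the function and variable names are invented for the example.

# Illustrative sketch (Python, not SPDK code): model of the CRC-32C data digest
# check whose failures appear above as "Data digest error" notices.
def crc32c(data: bytes) -> int:
    """Bitwise CRC-32C (Castagnoli); reflected polynomial 0x82F63B78."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0x82F63B78
            else:
                crc >>= 1
    return crc ^ 0xFFFFFFFF

def data_digest_ok(payload: bytes, ddgst: int) -> bool:
    # True when the locally computed digest matches the digest carried in the PDU.
    return crc32c(payload) == ddgst

if __name__ == "__main__":
    payload = bytes(range(64))                             # arbitrary example payload
    ddgst = crc32c(payload)                                # digest the sender would attach
    corrupted = bytes([payload[0] ^ 0x01]) + payload[1:]   # single bit flipped in flight
    print(data_digest_ok(payload, ddgst))                  # True: digest check passes
    print(data_digest_ok(corrupted, ddgst))                # False: a "Data digest error"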
00:28:46.805 [2024-07-15 16:20:22.516350] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.805 [2024-07-15 16:20:22.516754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.805 [2024-07-15 16:20:22.516770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.805 [2024-07-15 16:20:22.525524] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.805 [2024-07-15 16:20:22.526015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.805 [2024-07-15 16:20:22.526031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.805 [2024-07-15 16:20:22.535509] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.805 [2024-07-15 16:20:22.535921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.805 [2024-07-15 16:20:22.535938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.805 [2024-07-15 16:20:22.545149] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.805 [2024-07-15 16:20:22.545530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.805 [2024-07-15 16:20:22.545546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.805 [2024-07-15 16:20:22.555590] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.805 [2024-07-15 16:20:22.555800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.805 [2024-07-15 16:20:22.555818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.805 [2024-07-15 16:20:22.566962] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.805 [2024-07-15 16:20:22.567243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.806 [2024-07-15 16:20:22.567259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.806 [2024-07-15 16:20:22.576878] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.806 [2024-07-15 16:20:22.577217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.806 [2024-07-15 16:20:22.577233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.806 [2024-07-15 16:20:22.587221] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.806 [2024-07-15 16:20:22.587495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.806 [2024-07-15 16:20:22.587512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.806 [2024-07-15 16:20:22.597942] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.806 [2024-07-15 16:20:22.598136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.806 [2024-07-15 16:20:22.598151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.806 [2024-07-15 16:20:22.607765] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.806 [2024-07-15 16:20:22.608111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.806 [2024-07-15 16:20:22.608131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.806 [2024-07-15 16:20:22.618255] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.806 [2024-07-15 16:20:22.618538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.806 [2024-07-15 16:20:22.618554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.806 [2024-07-15 16:20:22.629090] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.806 [2024-07-15 16:20:22.629380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.806 [2024-07-15 16:20:22.629396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.806 [2024-07-15 16:20:22.639670] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:46.806 [2024-07-15 16:20:22.640083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.806 [2024-07-15 16:20:22.640099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.067 [2024-07-15 16:20:22.648580] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:47.067 [2024-07-15 16:20:22.648809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.067 [2024-07-15 16:20:22.648824] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.067 [2024-07-15 16:20:22.657886] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:47.067 [2024-07-15 16:20:22.658221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.067 [2024-07-15 16:20:22.658237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.067 [2024-07-15 16:20:22.666467] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:47.067 [2024-07-15 16:20:22.666685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.067 [2024-07-15 16:20:22.666700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.067 [2024-07-15 16:20:22.676082] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:47.067 [2024-07-15 16:20:22.676524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.067 [2024-07-15 16:20:22.676540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.067 [2024-07-15 16:20:22.685413] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:47.067 [2024-07-15 16:20:22.685768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.067 [2024-07-15 16:20:22.685784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.067 [2024-07-15 16:20:22.694856] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:47.067 [2024-07-15 16:20:22.695109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.067 [2024-07-15 16:20:22.695128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.067 [2024-07-15 16:20:22.701739] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:47.067 [2024-07-15 16:20:22.702038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.067 [2024-07-15 16:20:22.702054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.067 [2024-07-15 16:20:22.709389] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:47.067 [2024-07-15 16:20:22.709692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.067 [2024-07-15 16:20:22.709708] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.067 [2024-07-15 16:20:22.718820] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:47.067 [2024-07-15 16:20:22.719273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.067 [2024-07-15 16:20:22.719293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.067 [2024-07-15 16:20:22.728892] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:47.067 [2024-07-15 16:20:22.729252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.067 [2024-07-15 16:20:22.729268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.067 [2024-07-15 16:20:22.737895] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:47.067 [2024-07-15 16:20:22.738282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.067 [2024-07-15 16:20:22.738299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.067 [2024-07-15 16:20:22.746691] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:47.067 [2024-07-15 16:20:22.746972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.067 [2024-07-15 16:20:22.746988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.067 [2024-07-15 16:20:22.755612] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:47.067 [2024-07-15 16:20:22.755910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.067 [2024-07-15 16:20:22.755926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.067 [2024-07-15 16:20:22.764998] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:47.067 [2024-07-15 16:20:22.765209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.067 [2024-07-15 16:20:22.765224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.068 [2024-07-15 16:20:22.774251] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:47.068 [2024-07-15 16:20:22.774442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:47.068 [2024-07-15 16:20:22.774457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.068 [2024-07-15 16:20:22.782686] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:47.068 [2024-07-15 16:20:22.783098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.068 [2024-07-15 16:20:22.783114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.068 [2024-07-15 16:20:22.791917] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:47.068 [2024-07-15 16:20:22.792278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.068 [2024-07-15 16:20:22.792294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.068 [2024-07-15 16:20:22.801297] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:47.068 [2024-07-15 16:20:22.801519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.068 [2024-07-15 16:20:22.801534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.068 [2024-07-15 16:20:22.809946] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:47.068 [2024-07-15 16:20:22.810282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.068 [2024-07-15 16:20:22.810299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.068 [2024-07-15 16:20:22.819493] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:47.068 [2024-07-15 16:20:22.819769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.068 [2024-07-15 16:20:22.819785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.068 [2024-07-15 16:20:22.828408] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:47.068 [2024-07-15 16:20:22.828655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.068 [2024-07-15 16:20:22.828671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.068 [2024-07-15 16:20:22.837692] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:47.068 [2024-07-15 16:20:22.837934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.068 [2024-07-15 16:20:22.837950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.068 [2024-07-15 16:20:22.847589] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:47.068 [2024-07-15 16:20:22.847955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.068 [2024-07-15 16:20:22.847971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.068 [2024-07-15 16:20:22.856966] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:47.068 [2024-07-15 16:20:22.857316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.068 [2024-07-15 16:20:22.857333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.068 [2024-07-15 16:20:22.866175] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:47.068 [2024-07-15 16:20:22.866509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.068 [2024-07-15 16:20:22.866524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.068 [2024-07-15 16:20:22.875487] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:47.068 [2024-07-15 16:20:22.875879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.068 [2024-07-15 16:20:22.875895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.068 [2024-07-15 16:20:22.884940] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:47.068 [2024-07-15 16:20:22.885175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.068 [2024-07-15 16:20:22.885191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.068 [2024-07-15 16:20:22.893030] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:47.068 [2024-07-15 16:20:22.893297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.068 [2024-07-15 16:20:22.893313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.068 [2024-07-15 16:20:22.902078] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:47.068 [2024-07-15 16:20:22.902304] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.068 [2024-07-15 16:20:22.902319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.330 [2024-07-15 16:20:22.912241] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:47.330 [2024-07-15 16:20:22.912430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.330 [2024-07-15 16:20:22.912445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.330 [2024-07-15 16:20:22.920437] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:47.330 [2024-07-15 16:20:22.920629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.330 [2024-07-15 16:20:22.920645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.330 [2024-07-15 16:20:22.929173] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:47.330 [2024-07-15 16:20:22.929542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.330 [2024-07-15 16:20:22.929558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.330 [2024-07-15 16:20:22.937896] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:47.330 [2024-07-15 16:20:22.938189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.330 [2024-07-15 16:20:22.938204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.330 [2024-07-15 16:20:22.946658] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:47.330 [2024-07-15 16:20:22.946928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.330 [2024-07-15 16:20:22.946944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.330 [2024-07-15 16:20:22.955430] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:47.330 [2024-07-15 16:20:22.955642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.330 [2024-07-15 16:20:22.955661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.330 [2024-07-15 16:20:22.965114] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:47.330 [2024-07-15 16:20:22.965317] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.330 [2024-07-15 16:20:22.965333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.330 [2024-07-15 16:20:22.975360] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:47.330 [2024-07-15 16:20:22.975713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.330 [2024-07-15 16:20:22.975730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.330 [2024-07-15 16:20:22.987232] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:47.330 [2024-07-15 16:20:22.987644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.330 [2024-07-15 16:20:22.987660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.330 [2024-07-15 16:20:22.997159] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:47.330 [2024-07-15 16:20:22.997456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.330 [2024-07-15 16:20:22.997473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.330 [2024-07-15 16:20:23.007467] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:47.330 [2024-07-15 16:20:23.007674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.330 [2024-07-15 16:20:23.007690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.330 [2024-07-15 16:20:23.017347] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:47.330 [2024-07-15 16:20:23.017606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.330 [2024-07-15 16:20:23.017621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.330 [2024-07-15 16:20:23.027349] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:47.330 [2024-07-15 16:20:23.027820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.330 [2024-07-15 16:20:23.027836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.330 [2024-07-15 16:20:23.037991] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 
00:28:47.330 [2024-07-15 16:20:23.038331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.330 [2024-07-15 16:20:23.038348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.330 [2024-07-15 16:20:23.048538] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:47.330 [2024-07-15 16:20:23.048878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.330 [2024-07-15 16:20:23.048894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.330 [2024-07-15 16:20:23.058840] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:47.330 [2024-07-15 16:20:23.059232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.330 [2024-07-15 16:20:23.059248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.330 [2024-07-15 16:20:23.069062] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:47.330 [2024-07-15 16:20:23.069321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.330 [2024-07-15 16:20:23.069336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.330 [2024-07-15 16:20:23.080302] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:47.330 [2024-07-15 16:20:23.080501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.330 [2024-07-15 16:20:23.080517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.330 [2024-07-15 16:20:23.090138] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:47.330 [2024-07-15 16:20:23.090330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.330 [2024-07-15 16:20:23.090345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.330 [2024-07-15 16:20:23.100835] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90 00:28:47.330 [2024-07-15 16:20:23.101145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.330 [2024-07-15 16:20:23.101161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.330 [2024-07-15 16:20:23.111937] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1367c80) with pdu=0x2000190fef90
00:28:47.330 [2024-07-15 16:20:23.112141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.330 [2024-07-15 16:20:23.112157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:47.330 [2024-07-15 16:20:23.123439] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90
00:28:47.330 [2024-07-15 16:20:23.123845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.330 [2024-07-15 16:20:23.123861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:47.330 [2024-07-15 16:20:23.133988] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1367c80) with pdu=0x2000190fef90
00:28:47.330 [2024-07-15 16:20:23.134224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.330 [2024-07-15 16:20:23.134239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:47.330
00:28:47.330 Latency(us)
00:28:47.330 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:47.330 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:28:47.330 nvme0n1 : 2.00 3044.69 380.59 0.00 0.00 5246.69 3208.53 18240.85
00:28:47.330 ===================================================================================================================
00:28:47.330 Total : 3044.69 380.59 0.00 0.00 5246.69 3208.53 18240.85
00:28:47.330 0
00:28:47.330 16:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:47.330 16:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:47.330 16:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:47.330 | .driver_specific
00:28:47.330 | .nvme_error
00:28:47.330 | .status_code
00:28:47.330 | .command_transient_transport_error'
00:28:47.330 16:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:47.590 16:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 196 > 0 ))
00:28:47.590 16:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2464103
00:28:47.590 16:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2464103 ']'
00:28:47.590 16:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2464103
00:28:47.590 16:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:28:47.590 16:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:28:47.590 16:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2464103
00:28:47.590 16:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
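The summary rows above are the result of the digest_error bdevperf job: the 128 KiB randwrite workload at queue depth 16 sustained about 3044.69 IOPS and 380.59 MiB/s over the 2.00 s runtime, with an average completion latency of roughly 5.2 ms (min about 3.2 ms, max about 18.2 ms), and the (( 196 > 0 )) check is the test asserting that a non-zero number of transient transport errors was counted. The IOPS and MiB/s columns are mutually consistent with the 131072-byte IO size, and queue depth divided by average latency (16 / 5246.69 us) likewise works out to about 3.0k IOPS; a quick sanity check of the throughput figure (illustrative only, bc is not part of the test):

# 3044.69 IOs/s * 131072 bytes per IO / 1048576 bytes per MiB = ~380.59 MiB/s
echo '3044.69 * 131072 / 1048576' | bc -l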
00:28:47.590 16:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:47.590 16:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2464103' 00:28:47.590 killing process with pid 2464103 00:28:47.590 16:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2464103 00:28:47.590 Received shutdown signal, test time was about 2.000000 seconds 00:28:47.590 00:28:47.590 Latency(us) 00:28:47.590 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:47.590 =================================================================================================================== 00:28:47.590 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:47.590 16:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2464103 00:28:47.850 16:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2461835 00:28:47.850 16:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2461835 ']' 00:28:47.850 16:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2461835 00:28:47.850 16:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:28:47.850 16:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:47.850 16:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2461835 00:28:47.850 16:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:47.850 16:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:47.850 16:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2461835' 00:28:47.850 killing process with pid 2461835 00:28:47.850 16:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2461835 00:28:47.850 16:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2461835 00:28:47.850 00:28:47.850 real 0m15.988s 00:28:47.850 user 0m31.423s 00:28:47.850 sys 0m3.147s 00:28:47.850 16:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:47.850 16:20:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:47.850 ************************************ 00:28:47.850 END TEST nvmf_digest_error 00:28:47.850 ************************************ 00:28:48.111 16:20:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:28:48.111 16:20:23 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:28:48.111 16:20:23 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:28:48.111 16:20:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:48.111 16:20:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:28:48.111 16:20:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:48.111 16:20:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:28:48.111 16:20:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:48.111 16:20:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:48.111 rmmod nvme_tcp 00:28:48.111 rmmod 
nvme_fabrics 00:28:48.111 rmmod nvme_keyring 00:28:48.111 16:20:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:48.111 16:20:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:28:48.111 16:20:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:28:48.111 16:20:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 2461835 ']' 00:28:48.111 16:20:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 2461835 00:28:48.111 16:20:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 2461835 ']' 00:28:48.111 16:20:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 2461835 00:28:48.111 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2461835) - No such process 00:28:48.111 16:20:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 2461835 is not found' 00:28:48.111 Process with pid 2461835 is not found 00:28:48.111 16:20:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:48.111 16:20:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:48.111 16:20:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:48.111 16:20:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:48.111 16:20:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:48.111 16:20:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:48.111 16:20:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:48.111 16:20:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:50.035 16:20:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:50.035 00:28:50.035 real 0m42.034s 00:28:50.035 user 1m5.741s 00:28:50.035 sys 0m11.728s 00:28:50.035 16:20:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:50.035 16:20:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:50.035 ************************************ 00:28:50.035 END TEST nvmf_digest 00:28:50.035 ************************************ 00:28:50.296 16:20:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:50.296 16:20:25 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:28:50.296 16:20:25 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:28:50.296 16:20:25 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:28:50.296 16:20:25 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:50.296 16:20:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:50.296 16:20:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:50.296 16:20:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:50.296 ************************************ 00:28:50.296 START TEST nvmf_bdevperf 00:28:50.296 ************************************ 00:28:50.296 16:20:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:50.296 * Looking for test storage... 
00:28:50.296 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:50.296 16:20:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:56.901 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:56.901 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:56.901 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:56.901 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:56.901 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:56.901 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:56.901 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:56.901 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:28:56.901 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:56.901 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:28:56.901 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:28:56.901 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:28:56.901 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:28:56.901 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:28:56.901 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:56.901 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:56.901 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:56.901 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:56.901 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:56.901 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:56.901 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:56.901 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:56.901 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:56.901 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:56.901 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:56.901 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:56.901 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:56.901 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:56.901 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:56.901 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:56.901 16:20:32 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:56.901 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:56.901 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:56.901 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:56.901 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:56.901 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:56.901 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:56.901 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:56.902 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:56.902 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:56.902 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:56.902 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:56.902 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:56.902 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:56.902 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:56.902 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:56.902 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:56.902 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:56.902 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:56.902 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:56.902 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:56.902 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:56.902 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:56.902 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:56.902 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:56.902 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:56.902 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:56.902 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:56.902 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:56.902 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:56.902 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:56.902 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:56.902 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:56.902 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:56.902 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:56.902 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:56.902 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:56.902 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:56.902 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:56.902 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:56.902 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:56.902 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:56.902 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:56.902 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:56.902 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:56.902 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:56.902 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:56.902 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:56.902 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:56.902 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:56.902 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:56.902 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:56.902 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:56.902 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:56.902 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:56.902 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:56.902 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:56.902 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:56.902 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:57.162 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:57.162 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:57.162 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:57.162 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:57.162 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:57.162 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:57.162 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:57.162 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:57.162 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.481 ms 00:28:57.162 00:28:57.162 --- 10.0.0.2 ping statistics --- 00:28:57.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:57.162 rtt min/avg/max/mdev = 0.481/0.481/0.481/0.000 ms 00:28:57.162 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:57.162 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:57.162 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.467 ms 00:28:57.162 00:28:57.162 --- 10.0.0.1 ping statistics --- 00:28:57.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:57.162 rtt min/avg/max/mdev = 0.467/0.467/0.467/0.000 ms 00:28:57.162 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:57.162 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:28:57.162 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:57.162 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:57.162 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:57.162 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:57.162 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:57.162 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:57.162 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:57.162 16:20:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:28:57.162 16:20:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:57.162 16:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:57.162 16:20:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:57.162 16:20:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:57.423 16:20:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2469021 00:28:57.423 16:20:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2469021 00:28:57.423 16:20:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2469021 ']' 00:28:57.423 16:20:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:57.423 16:20:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:57.423 16:20:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:57.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:57.423 16:20:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:57.423 16:20:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:57.423 16:20:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:57.423 [2024-07-15 16:20:33.061546] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:28:57.423 [2024-07-15 16:20:33.061602] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:57.423 EAL: No free 2048 kB hugepages reported on node 1 00:28:57.423 [2024-07-15 16:20:33.147222] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:57.423 [2024-07-15 16:20:33.241646] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
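The two pings above are the connectivity check for the topology built by the nvmf_tcp_init trace further up: the first e810 port (cvl_0_0) was moved into the cvl_0_0_ns_spdk namespace with address 10.0.0.2/24 for the target, the second port (cvl_0_1) keeps 10.0.0.1/24 in the root namespace for the initiator, and an iptables rule admits TCP port 4420. On a machine without these NICs, a rough stand-in can be wired with a veth pair; the interface and namespace names below are invented for illustration and are not part of this run:

# Illustrative veth-based equivalent of the two-port namespace layout used above
ip netns add spdk_tgt_ns
ip link add veth_host type veth peer name veth_tgt
ip link set veth_tgt netns spdk_tgt_ns
ip addr add 10.0.0.1/24 dev veth_host
ip link set veth_host up
ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
ip netns exec spdk_tgt_ns ip link set veth_tgt up
ip netns exec spdk_tgt_ns ip link set lo up
iptables -I INPUT 1 -i veth_host -p tcp --dport 4420 -j ACCEPT   # only needed if a firewall filters the port
ping -c 1 10.0.0.2   # same check as the first ping above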
00:28:57.423 [2024-07-15 16:20:33.241700] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:57.423 [2024-07-15 16:20:33.241709] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:57.423 [2024-07-15 16:20:33.241716] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:57.423 [2024-07-15 16:20:33.241722] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:57.423 [2024-07-15 16:20:33.241857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:57.423 [2024-07-15 16:20:33.242031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:57.423 [2024-07-15 16:20:33.242032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:58.365 16:20:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:58.365 16:20:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:28:58.365 16:20:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:58.365 16:20:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:58.365 16:20:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:58.365 16:20:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:58.365 16:20:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:58.365 16:20:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.365 16:20:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:58.365 [2024-07-15 16:20:33.883524] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:58.365 16:20:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.365 16:20:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:58.365 16:20:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.365 16:20:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:58.365 Malloc0 00:28:58.365 16:20:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.365 16:20:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:58.365 16:20:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.365 16:20:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:58.365 16:20:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.365 16:20:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:58.365 16:20:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.365 16:20:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:58.365 16:20:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.365 16:20:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:58.365 16:20:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
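The nvmfappstart call above starts nvmf_tgt inside that namespace with core mask 0xE, and waitforlisten then polls the default RPC socket /var/tmp/spdk.sock until the target answers. A hand-rolled equivalent, assuming an SPDK build tree at $SPDK_DIR (a placeholder, not a path from this run), might look like:

# Sketch of nvmfappstart -m 0xE as run above: launch nvmf_tgt in the namespace
# and wait for its RPC socket to come up ($SPDK_DIR is an assumption).
SPDK_DIR=/path/to/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
until "$SPDK_DIR/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
    sleep 0.2
done
echo "nvmf_tgt ($nvmfpid) is answering RPCs on /var/tmp/spdk.sock"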
00:28:58.365 16:20:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:58.365 [2024-07-15 16:20:33.956440] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:58.365 16:20:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.365 16:20:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:58.365 16:20:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:58.365 16:20:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:28:58.365 16:20:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:28:58.365 16:20:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:58.365 16:20:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:58.365 { 00:28:58.365 "params": { 00:28:58.365 "name": "Nvme$subsystem", 00:28:58.365 "trtype": "$TEST_TRANSPORT", 00:28:58.365 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:58.365 "adrfam": "ipv4", 00:28:58.365 "trsvcid": "$NVMF_PORT", 00:28:58.365 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:58.365 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:58.365 "hdgst": ${hdgst:-false}, 00:28:58.365 "ddgst": ${ddgst:-false} 00:28:58.365 }, 00:28:58.365 "method": "bdev_nvme_attach_controller" 00:28:58.365 } 00:28:58.365 EOF 00:28:58.365 )") 00:28:58.365 16:20:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:28:58.365 16:20:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:28:58.365 16:20:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:28:58.365 16:20:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:58.365 "params": { 00:28:58.365 "name": "Nvme1", 00:28:58.365 "trtype": "tcp", 00:28:58.365 "traddr": "10.0.0.2", 00:28:58.365 "adrfam": "ipv4", 00:28:58.365 "trsvcid": "4420", 00:28:58.365 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:58.365 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:58.365 "hdgst": false, 00:28:58.365 "ddgst": false 00:28:58.365 }, 00:28:58.365 "method": "bdev_nvme_attach_controller" 00:28:58.365 }' 00:28:58.365 [2024-07-15 16:20:34.007713] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:28:58.365 [2024-07-15 16:20:34.007768] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2469109 ] 00:28:58.365 EAL: No free 2048 kB hugepages reported on node 1 00:28:58.365 [2024-07-15 16:20:34.066888] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.365 [2024-07-15 16:20:34.132438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:58.626 Running I/O for 1 seconds... 
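The rpc_cmd calls traced above map directly onto scripts/rpc.py subcommands with the same arguments (rpc.py talks to /var/tmp/spdk.sock by default), so the target provisioning for this test can be reproduced standalone as:

# Target provisioning equivalent to the rpc_cmd trace above.
RPC="$SPDK_DIR/scripts/rpc.py"
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0            # 64 MB malloc bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

gen_nvmf_target_json then emits the bdev_nvme_attach_controller parameters printed above, which is the JSON config bdevperf consumes via --json /dev/fd/62 before running the first 1-second verify workload.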
00:29:00.009 00:29:00.009 Latency(us) 00:29:00.009 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:00.009 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:00.009 Verification LBA range: start 0x0 length 0x4000 00:29:00.009 Nvme1n1 : 1.01 8934.36 34.90 0.00 0.00 14236.75 1262.93 12888.75 00:29:00.009 =================================================================================================================== 00:29:00.009 Total : 8934.36 34.90 0.00 0.00 14236.75 1262.93 12888.75 00:29:00.009 16:20:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2469406 00:29:00.009 16:20:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:29:00.009 16:20:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:00.009 16:20:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:00.009 16:20:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:29:00.009 16:20:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:29:00.009 16:20:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:00.009 16:20:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:00.009 { 00:29:00.009 "params": { 00:29:00.009 "name": "Nvme$subsystem", 00:29:00.009 "trtype": "$TEST_TRANSPORT", 00:29:00.009 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.009 "adrfam": "ipv4", 00:29:00.009 "trsvcid": "$NVMF_PORT", 00:29:00.009 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.009 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.009 "hdgst": ${hdgst:-false}, 00:29:00.009 "ddgst": ${ddgst:-false} 00:29:00.009 }, 00:29:00.009 "method": "bdev_nvme_attach_controller" 00:29:00.009 } 00:29:00.009 EOF 00:29:00.009 )") 00:29:00.009 16:20:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:29:00.009 16:20:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:29:00.009 16:20:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:29:00.009 16:20:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:00.009 "params": { 00:29:00.009 "name": "Nvme1", 00:29:00.009 "trtype": "tcp", 00:29:00.009 "traddr": "10.0.0.2", 00:29:00.009 "adrfam": "ipv4", 00:29:00.009 "trsvcid": "4420", 00:29:00.009 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:00.009 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:00.009 "hdgst": false, 00:29:00.009 "ddgst": false 00:29:00.009 }, 00:29:00.009 "method": "bdev_nvme_attach_controller" 00:29:00.009 }' 00:29:00.009 [2024-07-15 16:20:35.628733] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:29:00.009 [2024-07-15 16:20:35.628787] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2469406 ] 00:29:00.009 EAL: No free 2048 kB hugepages reported on node 1 00:29:00.009 [2024-07-15 16:20:35.688290] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:00.009 [2024-07-15 16:20:35.752556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:00.270 Running I/O for 15 seconds... 
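A quick sanity check on the result table above: the MiB/s column is just IOPS multiplied by the 4096-byte I/O size.

# Cross-check of the Nvme1n1 row above: 8934.36 IOPS at 4096 bytes per I/O.
awk 'BEGIN { printf "%.2f MiB/s\n", 8934.36 * 4096 / 1048576 }'
# prints 34.90 MiB/s, matching the reported column

The second bdevperf instance is then started with -t 15 and left running; the sleep 3 followed by the kill -9 of the target pid at the top of the next trace deliberately yanks the target away mid-run, which is what produces the abort storm that follows.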
00:29:02.816 16:20:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2469021 00:29:02.816 16:20:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:29:02.816 [2024-07-15 16:20:38.596248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:91888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.816 [2024-07-15 16:20:38.596292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.817 [2024-07-15 16:20:38.596313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:91896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.817 [2024-07-15 16:20:38.596322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.817 [2024-07-15 16:20:38.596332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:91904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.817 [2024-07-15 16:20:38.596339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.817 [2024-07-15 16:20:38.596349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:91912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.817 [2024-07-15 16:20:38.596356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.817 [2024-07-15 16:20:38.596366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:91920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.817 [2024-07-15 16:20:38.596381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.817 [2024-07-15 16:20:38.596394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:91928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.817 [2024-07-15 16:20:38.596403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.817 [2024-07-15 16:20:38.596415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:91936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.817 [2024-07-15 16:20:38.596424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.817 [2024-07-15 16:20:38.596435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:91944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.817 [2024-07-15 16:20:38.596444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.817 [2024-07-15 16:20:38.596455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:92624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.817 [2024-07-15 16:20:38.596464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.817 [2024-07-15 16:20:38.596475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:92632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.817 [2024-07-15 16:20:38.596484] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.817 [2024-07-15 16:20:38.596496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:92640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.817 [2024-07-15 16:20:38.596504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.817 [2024-07-15 16:20:38.596514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:92648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.817 [2024-07-15 16:20:38.596521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.817 [2024-07-15 16:20:38.596531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:92656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.817 [2024-07-15 16:20:38.596539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.817 [2024-07-15 16:20:38.596548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:92664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.817 [2024-07-15 16:20:38.596555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.817 [2024-07-15 16:20:38.596565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:92672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.817 [2024-07-15 16:20:38.596572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.817 [2024-07-15 16:20:38.596581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:92680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.817 [2024-07-15 16:20:38.596589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.817 [2024-07-15 16:20:38.596598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:92688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.817 [2024-07-15 16:20:38.596605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.817 [2024-07-15 16:20:38.596616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:92696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.817 [2024-07-15 16:20:38.596623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.817 [2024-07-15 16:20:38.596632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:92704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.817 [2024-07-15 16:20:38.596639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.817 [2024-07-15 16:20:38.596649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:92712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.817 [2024-07-15 16:20:38.596656] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.817 [2024-07-15 16:20:38.596665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:92720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.817 [2024-07-15 16:20:38.596673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.817 [2024-07-15 16:20:38.596682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:92728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.817 [2024-07-15 16:20:38.596689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.817 [2024-07-15 16:20:38.596699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:92736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.817 [2024-07-15 16:20:38.596706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.817 [2024-07-15 16:20:38.596716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:92744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.817 [2024-07-15 16:20:38.596724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.817 [2024-07-15 16:20:38.596733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:92752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.817 [2024-07-15 16:20:38.596740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.817 [2024-07-15 16:20:38.596749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:91952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.817 [2024-07-15 16:20:38.596756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.817 [2024-07-15 16:20:38.596766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:91960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.817 [2024-07-15 16:20:38.596772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.817 [2024-07-15 16:20:38.596782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:91968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.817 [2024-07-15 16:20:38.596789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.817 [2024-07-15 16:20:38.596798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:91976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.817 [2024-07-15 16:20:38.596805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.817 [2024-07-15 16:20:38.596814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:91984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.817 [2024-07-15 16:20:38.596823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.817 [2024-07-15 16:20:38.596832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:91992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.817 [2024-07-15 16:20:38.596839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.817 [2024-07-15 16:20:38.596848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:92000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.817 [2024-07-15 16:20:38.596855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.817 [2024-07-15 16:20:38.596864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:92008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.817 [2024-07-15 16:20:38.596871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.817 [2024-07-15 16:20:38.596881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:92016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.817 [2024-07-15 16:20:38.596888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.817 [2024-07-15 16:20:38.596898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:92024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.817 [2024-07-15 16:20:38.596905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.817 [2024-07-15 16:20:38.596914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:92032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.817 [2024-07-15 16:20:38.596921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.817 [2024-07-15 16:20:38.596930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:92040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.817 [2024-07-15 16:20:38.596938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.817 [2024-07-15 16:20:38.596947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:92760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.817 [2024-07-15 16:20:38.596954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.817 [2024-07-15 16:20:38.596964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:92768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.817 [2024-07-15 16:20:38.596971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.817 [2024-07-15 16:20:38.596980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:92776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.817 [2024-07-15 16:20:38.596987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:02.817 [2024-07-15 16:20:38.596996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:92784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.817 [2024-07-15 16:20:38.597003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.817 [2024-07-15 16:20:38.597012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:92792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.817 [2024-07-15 16:20:38.597020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.817 [2024-07-15 16:20:38.597029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:92800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.817 [2024-07-15 16:20:38.597037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.817 [2024-07-15 16:20:38.597046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:92808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.817 [2024-07-15 16:20:38.597053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.818 [2024-07-15 16:20:38.597063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:92816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.818 [2024-07-15 16:20:38.597070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.818 [2024-07-15 16:20:38.597079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:92824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.818 [2024-07-15 16:20:38.597086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.818 [2024-07-15 16:20:38.597095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:92832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.818 [2024-07-15 16:20:38.597102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.818 [2024-07-15 16:20:38.597111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:92840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.818 [2024-07-15 16:20:38.597119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.818 [2024-07-15 16:20:38.597133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:92048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.818 [2024-07-15 16:20:38.597140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.818 [2024-07-15 16:20:38.597149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:92056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.818 [2024-07-15 16:20:38.597156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.818 [2024-07-15 16:20:38.597166] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:92064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.818 [2024-07-15 16:20:38.597173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.818 [2024-07-15 16:20:38.597182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:92072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.818 [2024-07-15 16:20:38.597189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.818 [2024-07-15 16:20:38.597199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:92080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.818 [2024-07-15 16:20:38.597205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.818 [2024-07-15 16:20:38.597214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:92088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.818 [2024-07-15 16:20:38.597222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.818 [2024-07-15 16:20:38.597231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:92096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.818 [2024-07-15 16:20:38.597238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.818 [2024-07-15 16:20:38.597249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:92104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.818 [2024-07-15 16:20:38.597256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.818 [2024-07-15 16:20:38.597265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:92112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.818 [2024-07-15 16:20:38.597272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.818 [2024-07-15 16:20:38.597282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:92120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.818 [2024-07-15 16:20:38.597289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.818 [2024-07-15 16:20:38.597298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:92128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.818 [2024-07-15 16:20:38.597305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.818 [2024-07-15 16:20:38.597314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:92136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.818 [2024-07-15 16:20:38.597321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.818 [2024-07-15 16:20:38.597331] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:92144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.818 [2024-07-15 16:20:38.597339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.818 [2024-07-15 16:20:38.597348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.818 [2024-07-15 16:20:38.597355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.818 [2024-07-15 16:20:38.597364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:92160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.818 [2024-07-15 16:20:38.597371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.818 [2024-07-15 16:20:38.597380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:92168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.818 [2024-07-15 16:20:38.597388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.818 [2024-07-15 16:20:38.597397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:92176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.818 [2024-07-15 16:20:38.597404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.818 [2024-07-15 16:20:38.597413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:92184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.818 [2024-07-15 16:20:38.597421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.818 [2024-07-15 16:20:38.597430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:92192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.818 [2024-07-15 16:20:38.597437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.818 [2024-07-15 16:20:38.597447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:92200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.818 [2024-07-15 16:20:38.597455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.818 [2024-07-15 16:20:38.597465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:92208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.818 [2024-07-15 16:20:38.597472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.818 [2024-07-15 16:20:38.597481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:92216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.818 [2024-07-15 16:20:38.597488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.818 [2024-07-15 16:20:38.597498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:120 nsid:1 lba:92224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.818 [2024-07-15 16:20:38.597505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.818 [2024-07-15 16:20:38.597514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:92848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.818 [2024-07-15 16:20:38.597521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.818 [2024-07-15 16:20:38.597530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:92232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.818 [2024-07-15 16:20:38.597537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.818 [2024-07-15 16:20:38.597547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:92240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.818 [2024-07-15 16:20:38.597555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.818 [2024-07-15 16:20:38.597564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:92248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.818 [2024-07-15 16:20:38.597571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.818 [2024-07-15 16:20:38.597580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:92256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.818 [2024-07-15 16:20:38.597588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.818 [2024-07-15 16:20:38.597597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:92264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.818 [2024-07-15 16:20:38.597604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.818 [2024-07-15 16:20:38.597613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:92272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.818 [2024-07-15 16:20:38.597620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.818 [2024-07-15 16:20:38.597630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:92280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.818 [2024-07-15 16:20:38.597637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.818 [2024-07-15 16:20:38.597646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:92288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.818 [2024-07-15 16:20:38.597653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.818 [2024-07-15 16:20:38.597664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:92296 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.818 [2024-07-15 16:20:38.597671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.818 [2024-07-15 16:20:38.597681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:92304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.818 [2024-07-15 16:20:38.597688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.818 [2024-07-15 16:20:38.597697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:92312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.818 [2024-07-15 16:20:38.597704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.818 [2024-07-15 16:20:38.597713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:92320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.818 [2024-07-15 16:20:38.597720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.818 [2024-07-15 16:20:38.597730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:92328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.818 [2024-07-15 16:20:38.597737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.818 [2024-07-15 16:20:38.597746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:92336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.818 [2024-07-15 16:20:38.597753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.818 [2024-07-15 16:20:38.597763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:92344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.818 [2024-07-15 16:20:38.597769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.819 [2024-07-15 16:20:38.597778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.819 [2024-07-15 16:20:38.597786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.819 [2024-07-15 16:20:38.597795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:92360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.819 [2024-07-15 16:20:38.597802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.819 [2024-07-15 16:20:38.597811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:92368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.819 [2024-07-15 16:20:38.597818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.819 [2024-07-15 16:20:38.597827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:92376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:02.819 [2024-07-15 16:20:38.597834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.819 [2024-07-15 16:20:38.597844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:92384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.819 [2024-07-15 16:20:38.597851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.819 [2024-07-15 16:20:38.597860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:92392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.819 [2024-07-15 16:20:38.597868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.819 [2024-07-15 16:20:38.597877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:92400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.819 [2024-07-15 16:20:38.597884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.819 [2024-07-15 16:20:38.597894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:92408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.819 [2024-07-15 16:20:38.597901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.819 [2024-07-15 16:20:38.597910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:92416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.819 [2024-07-15 16:20:38.597917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.819 [2024-07-15 16:20:38.597926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:92424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.819 [2024-07-15 16:20:38.597933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.819 [2024-07-15 16:20:38.597943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:92432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.819 [2024-07-15 16:20:38.597950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.819 [2024-07-15 16:20:38.597959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:92440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.819 [2024-07-15 16:20:38.597966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.819 [2024-07-15 16:20:38.597976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:92448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.819 [2024-07-15 16:20:38.597983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.819 [2024-07-15 16:20:38.597992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:92456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.819 [2024-07-15 16:20:38.597999] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.819 [2024-07-15 16:20:38.598009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:92464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.819 [2024-07-15 16:20:38.598016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.819 [2024-07-15 16:20:38.598025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.819 [2024-07-15 16:20:38.598032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.819 [2024-07-15 16:20:38.598041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:92480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.819 [2024-07-15 16:20:38.598048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.819 [2024-07-15 16:20:38.598059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:92488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.819 [2024-07-15 16:20:38.598066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.819 [2024-07-15 16:20:38.598075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.819 [2024-07-15 16:20:38.598083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.819 [2024-07-15 16:20:38.598093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:92864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.819 [2024-07-15 16:20:38.598100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.819 [2024-07-15 16:20:38.598109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:92872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.819 [2024-07-15 16:20:38.598116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.819 [2024-07-15 16:20:38.598127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:92880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.819 [2024-07-15 16:20:38.598135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.819 [2024-07-15 16:20:38.598145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:92888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.819 [2024-07-15 16:20:38.598152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.819 [2024-07-15 16:20:38.598161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:92896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.819 [2024-07-15 16:20:38.598168] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.819 [2024-07-15 16:20:38.598177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:92904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.819 [2024-07-15 16:20:38.598183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.819 [2024-07-15 16:20:38.598193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:92496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.819 [2024-07-15 16:20:38.598200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.819 [2024-07-15 16:20:38.598209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:92504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.819 [2024-07-15 16:20:38.598216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.819 [2024-07-15 16:20:38.598225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:92512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.819 [2024-07-15 16:20:38.598232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.819 [2024-07-15 16:20:38.598241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:92520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.819 [2024-07-15 16:20:38.598249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.819 [2024-07-15 16:20:38.598258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:92528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.819 [2024-07-15 16:20:38.598265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.819 [2024-07-15 16:20:38.598275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:92536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.819 [2024-07-15 16:20:38.598282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.819 [2024-07-15 16:20:38.598292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:92544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.819 [2024-07-15 16:20:38.598300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.819 [2024-07-15 16:20:38.598309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:92552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.819 [2024-07-15 16:20:38.598316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.819 [2024-07-15 16:20:38.598325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:92560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.819 [2024-07-15 16:20:38.598332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.819 [2024-07-15 16:20:38.598341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:92568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.819 [2024-07-15 16:20:38.598349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.819 [2024-07-15 16:20:38.598358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:92576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.819 [2024-07-15 16:20:38.598365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.819 [2024-07-15 16:20:38.598374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:92584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.819 [2024-07-15 16:20:38.598381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.819 [2024-07-15 16:20:38.598390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:92592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.819 [2024-07-15 16:20:38.598397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.819 [2024-07-15 16:20:38.598407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:92600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.819 [2024-07-15 16:20:38.598414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.819 [2024-07-15 16:20:38.598423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:92608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.819 [2024-07-15 16:20:38.598430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.819 [2024-07-15 16:20:38.598438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170ba00 is same with the state(5) to be set 00:29:02.819 [2024-07-15 16:20:38.598447] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:02.819 [2024-07-15 16:20:38.598453] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:02.819 [2024-07-15 16:20:38.598460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92616 len:8 PRP1 0x0 PRP2 0x0 00:29:02.819 [2024-07-15 16:20:38.598469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.819 [2024-07-15 16:20:38.598507] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x170ba00 was disconnected and freed. reset controller. 
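Every entry in the block above is the same event repeated for each in-flight command: a READ or WRITE on I/O qpair 1 is completed with ABORTED - SQ DELETION once the target is killed, after which the qpair (0x170ba00) is disconnected and freed and bdev_nvme schedules a controller reset. When scanning floods like this it can help to aggregate rather than read line by line; a throwaway summary over a saved copy of this console log (the file name is hypothetical) is:

# Rough tally of the abort storm above from a saved console log.
grep 'nvme_io_qpair_print_command' nvmf_bdevperf.log | grep -oE 'READ|WRITE' | sort | uniq -c
grep -c 'ABORTED - SQ DELETION' nvmf_bdevperf.log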
00:29:02.820 [2024-07-15 16:20:38.598550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.820 [2024-07-15 16:20:38.598560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.820 [2024-07-15 16:20:38.598571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.820 [2024-07-15 16:20:38.598578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.820 [2024-07-15 16:20:38.598586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.820 [2024-07-15 16:20:38.598593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.820 [2024-07-15 16:20:38.598601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.820 [2024-07-15 16:20:38.598608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.820 [2024-07-15 16:20:38.598615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:02.820 [2024-07-15 16:20:38.602180] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.820 [2024-07-15 16:20:38.602200] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:02.820 [2024-07-15 16:20:38.603079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.820 [2024-07-15 16:20:38.603096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:02.820 [2024-07-15 16:20:38.603104] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:02.820 [2024-07-15 16:20:38.603325] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:02.820 [2024-07-15 16:20:38.603542] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.820 [2024-07-15 16:20:38.603551] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.820 [2024-07-15 16:20:38.603558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.820 [2024-07-15 16:20:38.607050] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:02.820 [2024-07-15 16:20:38.616316] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.820 [2024-07-15 16:20:38.617008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.820 [2024-07-15 16:20:38.617045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:02.820 [2024-07-15 16:20:38.617057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:02.820 [2024-07-15 16:20:38.617306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:02.820 [2024-07-15 16:20:38.617526] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.820 [2024-07-15 16:20:38.617534] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.820 [2024-07-15 16:20:38.617542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.820 [2024-07-15 16:20:38.621034] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:02.820 [2024-07-15 16:20:38.630090] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.820 [2024-07-15 16:20:38.630767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.820 [2024-07-15 16:20:38.630803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:02.820 [2024-07-15 16:20:38.630821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:02.820 [2024-07-15 16:20:38.631057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:02.820 [2024-07-15 16:20:38.631287] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.820 [2024-07-15 16:20:38.631297] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.820 [2024-07-15 16:20:38.631304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.820 [2024-07-15 16:20:38.634809] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:02.820 [2024-07-15 16:20:38.643860] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.820 [2024-07-15 16:20:38.644521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.820 [2024-07-15 16:20:38.644558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:02.820 [2024-07-15 16:20:38.644569] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:02.820 [2024-07-15 16:20:38.644805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:02.820 [2024-07-15 16:20:38.645024] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.820 [2024-07-15 16:20:38.645033] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.820 [2024-07-15 16:20:38.645041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.820 [2024-07-15 16:20:38.648543] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.082 [2024-07-15 16:20:38.657599] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.082 [2024-07-15 16:20:38.658429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.082 [2024-07-15 16:20:38.658465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.082 [2024-07-15 16:20:38.658475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.082 [2024-07-15 16:20:38.658711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.082 [2024-07-15 16:20:38.658930] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.082 [2024-07-15 16:20:38.658938] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.082 [2024-07-15 16:20:38.658946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.082 [2024-07-15 16:20:38.662449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.082 [2024-07-15 16:20:38.671333] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.082 [2024-07-15 16:20:38.672085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.082 [2024-07-15 16:20:38.672130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.082 [2024-07-15 16:20:38.672142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.082 [2024-07-15 16:20:38.672378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.082 [2024-07-15 16:20:38.672598] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.082 [2024-07-15 16:20:38.672611] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.082 [2024-07-15 16:20:38.672618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.082 [2024-07-15 16:20:38.676112] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.082 [2024-07-15 16:20:38.685164] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.082 [2024-07-15 16:20:38.685897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.082 [2024-07-15 16:20:38.685933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.082 [2024-07-15 16:20:38.685943] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.082 [2024-07-15 16:20:38.686188] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.082 [2024-07-15 16:20:38.686408] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.082 [2024-07-15 16:20:38.686416] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.082 [2024-07-15 16:20:38.686423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.082 [2024-07-15 16:20:38.689916] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.082 [2024-07-15 16:20:38.698969] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.082 [2024-07-15 16:20:38.699612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.082 [2024-07-15 16:20:38.699649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.082 [2024-07-15 16:20:38.699659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.082 [2024-07-15 16:20:38.699895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.082 [2024-07-15 16:20:38.700115] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.082 [2024-07-15 16:20:38.700131] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.082 [2024-07-15 16:20:38.700139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.082 [2024-07-15 16:20:38.703633] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.082 [2024-07-15 16:20:38.712895] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.082 [2024-07-15 16:20:38.713595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.082 [2024-07-15 16:20:38.713632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.082 [2024-07-15 16:20:38.713642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.082 [2024-07-15 16:20:38.713878] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.082 [2024-07-15 16:20:38.714097] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.082 [2024-07-15 16:20:38.714106] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.082 [2024-07-15 16:20:38.714114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.082 [2024-07-15 16:20:38.717614] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.082 [2024-07-15 16:20:38.726680] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.082 [2024-07-15 16:20:38.727376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.082 [2024-07-15 16:20:38.727419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.082 [2024-07-15 16:20:38.727431] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.082 [2024-07-15 16:20:38.727670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.082 [2024-07-15 16:20:38.727889] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.082 [2024-07-15 16:20:38.727898] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.082 [2024-07-15 16:20:38.727906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.082 [2024-07-15 16:20:38.731406] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.082 [2024-07-15 16:20:38.740470] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.082 [2024-07-15 16:20:38.741178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.082 [2024-07-15 16:20:38.741215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.082 [2024-07-15 16:20:38.741227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.082 [2024-07-15 16:20:38.741466] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.083 [2024-07-15 16:20:38.741685] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.083 [2024-07-15 16:20:38.741694] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.083 [2024-07-15 16:20:38.741702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.083 [2024-07-15 16:20:38.745201] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.083 [2024-07-15 16:20:38.754255] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.083 [2024-07-15 16:20:38.754991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.083 [2024-07-15 16:20:38.755028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.083 [2024-07-15 16:20:38.755039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.083 [2024-07-15 16:20:38.755284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.083 [2024-07-15 16:20:38.755505] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.083 [2024-07-15 16:20:38.755513] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.083 [2024-07-15 16:20:38.755521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.083 [2024-07-15 16:20:38.759011] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.083 [2024-07-15 16:20:38.768075] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.083 [2024-07-15 16:20:38.768776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.083 [2024-07-15 16:20:38.768813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.083 [2024-07-15 16:20:38.768824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.083 [2024-07-15 16:20:38.769065] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.083 [2024-07-15 16:20:38.769291] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.083 [2024-07-15 16:20:38.769302] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.083 [2024-07-15 16:20:38.769309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.083 [2024-07-15 16:20:38.772803] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.083 [2024-07-15 16:20:38.781858] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.083 [2024-07-15 16:20:38.782586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.083 [2024-07-15 16:20:38.782623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.083 [2024-07-15 16:20:38.782633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.083 [2024-07-15 16:20:38.782870] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.083 [2024-07-15 16:20:38.783089] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.083 [2024-07-15 16:20:38.783098] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.083 [2024-07-15 16:20:38.783106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.083 [2024-07-15 16:20:38.786607] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.083 [2024-07-15 16:20:38.795661] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.083 [2024-07-15 16:20:38.796413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.083 [2024-07-15 16:20:38.796450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.083 [2024-07-15 16:20:38.796460] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.083 [2024-07-15 16:20:38.796697] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.083 [2024-07-15 16:20:38.796916] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.083 [2024-07-15 16:20:38.796924] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.083 [2024-07-15 16:20:38.796932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.083 [2024-07-15 16:20:38.800430] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.083 [2024-07-15 16:20:38.809483] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.083 [2024-07-15 16:20:38.810202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.083 [2024-07-15 16:20:38.810239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.083 [2024-07-15 16:20:38.810251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.083 [2024-07-15 16:20:38.810489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.083 [2024-07-15 16:20:38.810708] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.083 [2024-07-15 16:20:38.810716] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.083 [2024-07-15 16:20:38.810728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.083 [2024-07-15 16:20:38.814230] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.083 [2024-07-15 16:20:38.823287] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.083 [2024-07-15 16:20:38.823987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.083 [2024-07-15 16:20:38.824024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.083 [2024-07-15 16:20:38.824034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.083 [2024-07-15 16:20:38.824278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.083 [2024-07-15 16:20:38.824498] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.083 [2024-07-15 16:20:38.824506] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.083 [2024-07-15 16:20:38.824514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.083 [2024-07-15 16:20:38.828007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.083 [2024-07-15 16:20:38.837078] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.083 [2024-07-15 16:20:38.837846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.083 [2024-07-15 16:20:38.837882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.083 [2024-07-15 16:20:38.837893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.083 [2024-07-15 16:20:38.838136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.083 [2024-07-15 16:20:38.838356] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.083 [2024-07-15 16:20:38.838364] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.083 [2024-07-15 16:20:38.838372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.083 [2024-07-15 16:20:38.841867] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.083 [2024-07-15 16:20:38.850919] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.083 [2024-07-15 16:20:38.851637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.083 [2024-07-15 16:20:38.851674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.083 [2024-07-15 16:20:38.851685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.083 [2024-07-15 16:20:38.851921] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.083 [2024-07-15 16:20:38.852148] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.083 [2024-07-15 16:20:38.852158] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.083 [2024-07-15 16:20:38.852165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.083 [2024-07-15 16:20:38.855658] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.083 [2024-07-15 16:20:38.864711] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.083 [2024-07-15 16:20:38.865415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.083 [2024-07-15 16:20:38.865451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.083 [2024-07-15 16:20:38.865462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.083 [2024-07-15 16:20:38.865698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.083 [2024-07-15 16:20:38.865917] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.083 [2024-07-15 16:20:38.865926] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.083 [2024-07-15 16:20:38.865933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.083 [2024-07-15 16:20:38.869437] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.083 [2024-07-15 16:20:38.878495] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.083 [2024-07-15 16:20:38.879153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.083 [2024-07-15 16:20:38.879171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.083 [2024-07-15 16:20:38.879179] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.083 [2024-07-15 16:20:38.879396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.083 [2024-07-15 16:20:38.879611] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.083 [2024-07-15 16:20:38.879619] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.083 [2024-07-15 16:20:38.879626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.083 [2024-07-15 16:20:38.883114] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.083 [2024-07-15 16:20:38.892373] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.083 [2024-07-15 16:20:38.893070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.083 [2024-07-15 16:20:38.893107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.083 [2024-07-15 16:20:38.893118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.084 [2024-07-15 16:20:38.893364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.084 [2024-07-15 16:20:38.893583] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.084 [2024-07-15 16:20:38.893592] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.084 [2024-07-15 16:20:38.893599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.084 [2024-07-15 16:20:38.897092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.084 [2024-07-15 16:20:38.906148] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.084 [2024-07-15 16:20:38.906856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.084 [2024-07-15 16:20:38.906891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.084 [2024-07-15 16:20:38.906902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.084 [2024-07-15 16:20:38.907146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.084 [2024-07-15 16:20:38.907370] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.084 [2024-07-15 16:20:38.907379] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.084 [2024-07-15 16:20:38.907386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.084 [2024-07-15 16:20:38.910878] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.084 [2024-07-15 16:20:38.919927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.084 [2024-07-15 16:20:38.920660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.084 [2024-07-15 16:20:38.920696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.084 [2024-07-15 16:20:38.920707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.084 [2024-07-15 16:20:38.920943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.084 [2024-07-15 16:20:38.921171] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.084 [2024-07-15 16:20:38.921180] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.084 [2024-07-15 16:20:38.921187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.349 [2024-07-15 16:20:38.924679] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.349 [2024-07-15 16:20:38.933740] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.349 [2024-07-15 16:20:38.934458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.349 [2024-07-15 16:20:38.934495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.349 [2024-07-15 16:20:38.934505] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.349 [2024-07-15 16:20:38.934741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.349 [2024-07-15 16:20:38.934961] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.349 [2024-07-15 16:20:38.934969] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.349 [2024-07-15 16:20:38.934976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.349 [2024-07-15 16:20:38.938484] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.349 [2024-07-15 16:20:38.947538] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.349 [2024-07-15 16:20:38.948211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.349 [2024-07-15 16:20:38.948248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.349 [2024-07-15 16:20:38.948259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.349 [2024-07-15 16:20:38.948495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.349 [2024-07-15 16:20:38.948714] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.349 [2024-07-15 16:20:38.948723] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.349 [2024-07-15 16:20:38.948730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.349 [2024-07-15 16:20:38.952238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.349 [2024-07-15 16:20:38.961290] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.349 [2024-07-15 16:20:38.961941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.349 [2024-07-15 16:20:38.961977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.349 [2024-07-15 16:20:38.961988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.349 [2024-07-15 16:20:38.962233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.349 [2024-07-15 16:20:38.962453] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.350 [2024-07-15 16:20:38.962461] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.350 [2024-07-15 16:20:38.962469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.350 [2024-07-15 16:20:38.965960] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.350 [2024-07-15 16:20:38.975021] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.350 [2024-07-15 16:20:38.975717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.350 [2024-07-15 16:20:38.975754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.350 [2024-07-15 16:20:38.975765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.350 [2024-07-15 16:20:38.976001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.350 [2024-07-15 16:20:38.976229] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.350 [2024-07-15 16:20:38.976238] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.350 [2024-07-15 16:20:38.976245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.350 [2024-07-15 16:20:38.979741] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.350 [2024-07-15 16:20:38.988793] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.350 [2024-07-15 16:20:38.989511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.350 [2024-07-15 16:20:38.989547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.350 [2024-07-15 16:20:38.989558] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.350 [2024-07-15 16:20:38.989794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.350 [2024-07-15 16:20:38.990013] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.350 [2024-07-15 16:20:38.990021] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.350 [2024-07-15 16:20:38.990029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.350 [2024-07-15 16:20:38.993528] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.350 [2024-07-15 16:20:39.002584] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.350 [2024-07-15 16:20:39.003257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.350 [2024-07-15 16:20:39.003298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.350 [2024-07-15 16:20:39.003309] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.350 [2024-07-15 16:20:39.003545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.350 [2024-07-15 16:20:39.003765] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.350 [2024-07-15 16:20:39.003773] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.350 [2024-07-15 16:20:39.003780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.350 [2024-07-15 16:20:39.007285] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.350 [2024-07-15 16:20:39.016339] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.350 [2024-07-15 16:20:39.016963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.350 [2024-07-15 16:20:39.016980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.350 [2024-07-15 16:20:39.016988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.350 [2024-07-15 16:20:39.017211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.350 [2024-07-15 16:20:39.017427] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.350 [2024-07-15 16:20:39.017434] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.350 [2024-07-15 16:20:39.017441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.350 [2024-07-15 16:20:39.020927] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.350 [2024-07-15 16:20:39.030184] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.350 [2024-07-15 16:20:39.030866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.350 [2024-07-15 16:20:39.030902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.350 [2024-07-15 16:20:39.030913] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.350 [2024-07-15 16:20:39.031156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.350 [2024-07-15 16:20:39.031376] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.350 [2024-07-15 16:20:39.031385] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.350 [2024-07-15 16:20:39.031392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.350 [2024-07-15 16:20:39.034896] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.350 [2024-07-15 16:20:39.043961] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.350 [2024-07-15 16:20:39.044696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.350 [2024-07-15 16:20:39.044732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.350 [2024-07-15 16:20:39.044743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.350 [2024-07-15 16:20:39.044979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.350 [2024-07-15 16:20:39.045211] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.350 [2024-07-15 16:20:39.045220] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.350 [2024-07-15 16:20:39.045227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.350 [2024-07-15 16:20:39.048828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.350 [2024-07-15 16:20:39.057895] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.350 [2024-07-15 16:20:39.058614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.350 [2024-07-15 16:20:39.058651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.350 [2024-07-15 16:20:39.058662] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.350 [2024-07-15 16:20:39.058898] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.350 [2024-07-15 16:20:39.059117] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.350 [2024-07-15 16:20:39.059135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.350 [2024-07-15 16:20:39.059143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.350 [2024-07-15 16:20:39.062633] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.350 [2024-07-15 16:20:39.071714] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.350 [2024-07-15 16:20:39.072465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.350 [2024-07-15 16:20:39.072502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.350 [2024-07-15 16:20:39.072513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.350 [2024-07-15 16:20:39.072748] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.350 [2024-07-15 16:20:39.072969] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.350 [2024-07-15 16:20:39.072979] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.350 [2024-07-15 16:20:39.072987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.350 [2024-07-15 16:20:39.076490] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.350 [2024-07-15 16:20:39.085547] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.350 [2024-07-15 16:20:39.086333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.350 [2024-07-15 16:20:39.086370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.350 [2024-07-15 16:20:39.086381] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.350 [2024-07-15 16:20:39.086616] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.350 [2024-07-15 16:20:39.086837] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.350 [2024-07-15 16:20:39.086845] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.350 [2024-07-15 16:20:39.086853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.350 [2024-07-15 16:20:39.090353] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.350 [2024-07-15 16:20:39.099417] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.350 [2024-07-15 16:20:39.100086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.350 [2024-07-15 16:20:39.100104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.350 [2024-07-15 16:20:39.100113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.350 [2024-07-15 16:20:39.100335] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.350 [2024-07-15 16:20:39.100551] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.350 [2024-07-15 16:20:39.100559] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.350 [2024-07-15 16:20:39.100566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.350 [2024-07-15 16:20:39.104053] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.350 [2024-07-15 16:20:39.113317] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.350 [2024-07-15 16:20:39.114066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.350 [2024-07-15 16:20:39.114103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.350 [2024-07-15 16:20:39.114115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.350 [2024-07-15 16:20:39.114362] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.351 [2024-07-15 16:20:39.114582] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.351 [2024-07-15 16:20:39.114591] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.351 [2024-07-15 16:20:39.114598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.351 [2024-07-15 16:20:39.118092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.351 [2024-07-15 16:20:39.127174] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.351 [2024-07-15 16:20:39.127918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.351 [2024-07-15 16:20:39.127954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.351 [2024-07-15 16:20:39.127965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.351 [2024-07-15 16:20:39.128208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.351 [2024-07-15 16:20:39.128428] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.351 [2024-07-15 16:20:39.128436] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.351 [2024-07-15 16:20:39.128444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.351 [2024-07-15 16:20:39.131935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.351 [2024-07-15 16:20:39.141013] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.351 [2024-07-15 16:20:39.141749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.351 [2024-07-15 16:20:39.141785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.351 [2024-07-15 16:20:39.141802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.351 [2024-07-15 16:20:39.142039] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.351 [2024-07-15 16:20:39.142267] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.351 [2024-07-15 16:20:39.142276] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.351 [2024-07-15 16:20:39.142284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.351 [2024-07-15 16:20:39.145782] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.351 [2024-07-15 16:20:39.154844] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.351 [2024-07-15 16:20:39.155484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.351 [2024-07-15 16:20:39.155521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.351 [2024-07-15 16:20:39.155532] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.351 [2024-07-15 16:20:39.155768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.351 [2024-07-15 16:20:39.155987] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.351 [2024-07-15 16:20:39.155995] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.351 [2024-07-15 16:20:39.156003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.351 [2024-07-15 16:20:39.159502] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.351 [2024-07-15 16:20:39.168771] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.351 [2024-07-15 16:20:39.169508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.351 [2024-07-15 16:20:39.169545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.351 [2024-07-15 16:20:39.169556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.351 [2024-07-15 16:20:39.169792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.351 [2024-07-15 16:20:39.170011] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.351 [2024-07-15 16:20:39.170019] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.351 [2024-07-15 16:20:39.170026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.351 [2024-07-15 16:20:39.173529] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.351 [2024-07-15 16:20:39.182591] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.351 [2024-07-15 16:20:39.183237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.351 [2024-07-15 16:20:39.183274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.351 [2024-07-15 16:20:39.183286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.351 [2024-07-15 16:20:39.183525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.351 [2024-07-15 16:20:39.183744] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.351 [2024-07-15 16:20:39.183758] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.351 [2024-07-15 16:20:39.183766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.351 [2024-07-15 16:20:39.187267] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.701 [2024-07-15 16:20:39.196328] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.701 [2024-07-15 16:20:39.196972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.701 [2024-07-15 16:20:39.196990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.701 [2024-07-15 16:20:39.196998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.701 [2024-07-15 16:20:39.197220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.701 [2024-07-15 16:20:39.197436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.701 [2024-07-15 16:20:39.197444] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.701 [2024-07-15 16:20:39.197451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.701 [2024-07-15 16:20:39.200936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.701 [2024-07-15 16:20:39.210198] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.701 [2024-07-15 16:20:39.210912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.701 [2024-07-15 16:20:39.210948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.701 [2024-07-15 16:20:39.210958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.701 [2024-07-15 16:20:39.211203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.701 [2024-07-15 16:20:39.211424] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.701 [2024-07-15 16:20:39.211432] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.701 [2024-07-15 16:20:39.211440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.701 [2024-07-15 16:20:39.214935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.701 [2024-07-15 16:20:39.223993] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.701 [2024-07-15 16:20:39.224695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.701 [2024-07-15 16:20:39.224732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.701 [2024-07-15 16:20:39.224743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.701 [2024-07-15 16:20:39.224980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.701 [2024-07-15 16:20:39.225206] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.701 [2024-07-15 16:20:39.225215] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.701 [2024-07-15 16:20:39.225223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.701 [2024-07-15 16:20:39.228716] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.701 [2024-07-15 16:20:39.237785] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.701 [2024-07-15 16:20:39.238555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.701 [2024-07-15 16:20:39.238592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.701 [2024-07-15 16:20:39.238603] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.701 [2024-07-15 16:20:39.238839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.701 [2024-07-15 16:20:39.239058] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.701 [2024-07-15 16:20:39.239066] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.701 [2024-07-15 16:20:39.239073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.701 [2024-07-15 16:20:39.242573] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.701 [2024-07-15 16:20:39.251633] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.701 [2024-07-15 16:20:39.252225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.701 [2024-07-15 16:20:39.252261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.701 [2024-07-15 16:20:39.252274] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.701 [2024-07-15 16:20:39.252511] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.701 [2024-07-15 16:20:39.252729] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.702 [2024-07-15 16:20:39.252738] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.702 [2024-07-15 16:20:39.252745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.702 [2024-07-15 16:20:39.256248] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.702 [2024-07-15 16:20:39.265510] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.702 [2024-07-15 16:20:39.266270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.702 [2024-07-15 16:20:39.266306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.702 [2024-07-15 16:20:39.266318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.702 [2024-07-15 16:20:39.266558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.702 [2024-07-15 16:20:39.266777] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.702 [2024-07-15 16:20:39.266788] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.702 [2024-07-15 16:20:39.266799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.702 [2024-07-15 16:20:39.270305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.702 [2024-07-15 16:20:39.279365] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.702 [2024-07-15 16:20:39.280102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.702 [2024-07-15 16:20:39.280146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.702 [2024-07-15 16:20:39.280159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.702 [2024-07-15 16:20:39.280401] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.702 [2024-07-15 16:20:39.280620] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.702 [2024-07-15 16:20:39.280629] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.702 [2024-07-15 16:20:39.280637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.702 [2024-07-15 16:20:39.284132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.702 [2024-07-15 16:20:39.293188] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.702 [2024-07-15 16:20:39.293631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.702 [2024-07-15 16:20:39.293649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.702 [2024-07-15 16:20:39.293657] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.702 [2024-07-15 16:20:39.293873] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.702 [2024-07-15 16:20:39.294088] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.702 [2024-07-15 16:20:39.294095] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.702 [2024-07-15 16:20:39.294102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.702 [2024-07-15 16:20:39.297596] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.702 [2024-07-15 16:20:39.307059] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.702 [2024-07-15 16:20:39.307688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.702 [2024-07-15 16:20:39.307703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.702 [2024-07-15 16:20:39.307711] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.702 [2024-07-15 16:20:39.307926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.702 [2024-07-15 16:20:39.308146] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.702 [2024-07-15 16:20:39.308154] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.702 [2024-07-15 16:20:39.308160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.702 [2024-07-15 16:20:39.311646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.702 [2024-07-15 16:20:39.320905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.702 [2024-07-15 16:20:39.321542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.702 [2024-07-15 16:20:39.321557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.702 [2024-07-15 16:20:39.321565] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.702 [2024-07-15 16:20:39.321780] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.702 [2024-07-15 16:20:39.321995] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.702 [2024-07-15 16:20:39.322002] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.702 [2024-07-15 16:20:39.322017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.702 [2024-07-15 16:20:39.325509] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.702 [2024-07-15 16:20:39.334780] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.702 [2024-07-15 16:20:39.335507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.702 [2024-07-15 16:20:39.335544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.702 [2024-07-15 16:20:39.335554] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.702 [2024-07-15 16:20:39.335791] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.702 [2024-07-15 16:20:39.336010] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.702 [2024-07-15 16:20:39.336018] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.702 [2024-07-15 16:20:39.336026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.702 [2024-07-15 16:20:39.339530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.702 [2024-07-15 16:20:39.348589] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.702 [2024-07-15 16:20:39.349404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.702 [2024-07-15 16:20:39.349441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.702 [2024-07-15 16:20:39.349452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.702 [2024-07-15 16:20:39.349688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.702 [2024-07-15 16:20:39.349908] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.702 [2024-07-15 16:20:39.349916] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.702 [2024-07-15 16:20:39.349923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.702 [2024-07-15 16:20:39.353427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.702 [2024-07-15 16:20:39.362485] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.702 [2024-07-15 16:20:39.362993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.702 [2024-07-15 16:20:39.363012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.702 [2024-07-15 16:20:39.363020] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.702 [2024-07-15 16:20:39.363243] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.702 [2024-07-15 16:20:39.363459] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.702 [2024-07-15 16:20:39.363468] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.702 [2024-07-15 16:20:39.363475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.702 [2024-07-15 16:20:39.366970] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.702 [2024-07-15 16:20:39.376241] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.702 [2024-07-15 16:20:39.376927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.702 [2024-07-15 16:20:39.376964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.702 [2024-07-15 16:20:39.376974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.702 [2024-07-15 16:20:39.377217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.702 [2024-07-15 16:20:39.377437] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.702 [2024-07-15 16:20:39.377446] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.702 [2024-07-15 16:20:39.377453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.702 [2024-07-15 16:20:39.380946] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.702 [2024-07-15 16:20:39.390005] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.702 [2024-07-15 16:20:39.390728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.702 [2024-07-15 16:20:39.390765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.702 [2024-07-15 16:20:39.390776] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.702 [2024-07-15 16:20:39.391011] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.702 [2024-07-15 16:20:39.391238] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.702 [2024-07-15 16:20:39.391248] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.703 [2024-07-15 16:20:39.391256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.703 [2024-07-15 16:20:39.394750] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.703 [2024-07-15 16:20:39.403804] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.703 [2024-07-15 16:20:39.404529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.703 [2024-07-15 16:20:39.404566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.703 [2024-07-15 16:20:39.404576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.703 [2024-07-15 16:20:39.404812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.703 [2024-07-15 16:20:39.405032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.703 [2024-07-15 16:20:39.405040] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.703 [2024-07-15 16:20:39.405048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.703 [2024-07-15 16:20:39.408548] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.703 [2024-07-15 16:20:39.417603] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.703 [2024-07-15 16:20:39.418374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.703 [2024-07-15 16:20:39.418411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.703 [2024-07-15 16:20:39.418422] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.703 [2024-07-15 16:20:39.418657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.703 [2024-07-15 16:20:39.418881] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.703 [2024-07-15 16:20:39.418890] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.703 [2024-07-15 16:20:39.418898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.703 [2024-07-15 16:20:39.422399] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.703 [2024-07-15 16:20:39.431455] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.703 [2024-07-15 16:20:39.432080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.703 [2024-07-15 16:20:39.432098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.703 [2024-07-15 16:20:39.432106] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.703 [2024-07-15 16:20:39.432354] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.703 [2024-07-15 16:20:39.432571] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.703 [2024-07-15 16:20:39.432579] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.703 [2024-07-15 16:20:39.432586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.703 [2024-07-15 16:20:39.436086] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.703 [2024-07-15 16:20:39.445343] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.703 [2024-07-15 16:20:39.445989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.703 [2024-07-15 16:20:39.446005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.703 [2024-07-15 16:20:39.446012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.703 [2024-07-15 16:20:39.446232] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.703 [2024-07-15 16:20:39.446448] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.703 [2024-07-15 16:20:39.446456] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.703 [2024-07-15 16:20:39.446463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.703 [2024-07-15 16:20:39.449949] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.703 [2024-07-15 16:20:39.459223] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.703 [2024-07-15 16:20:39.459835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.703 [2024-07-15 16:20:39.459850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.703 [2024-07-15 16:20:39.459858] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.703 [2024-07-15 16:20:39.460073] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.703 [2024-07-15 16:20:39.460295] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.703 [2024-07-15 16:20:39.460304] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.703 [2024-07-15 16:20:39.460310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.703 [2024-07-15 16:20:39.463807] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.703 [2024-07-15 16:20:39.473084] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.703 [2024-07-15 16:20:39.473792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.703 [2024-07-15 16:20:39.473828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.703 [2024-07-15 16:20:39.473839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.703 [2024-07-15 16:20:39.474075] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.703 [2024-07-15 16:20:39.474304] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.703 [2024-07-15 16:20:39.474313] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.703 [2024-07-15 16:20:39.474321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.703 [2024-07-15 16:20:39.477817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.703 [2024-07-15 16:20:39.486887] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.703 [2024-07-15 16:20:39.487483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.703 [2024-07-15 16:20:39.487519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.703 [2024-07-15 16:20:39.487530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.703 [2024-07-15 16:20:39.487765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.703 [2024-07-15 16:20:39.487985] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.703 [2024-07-15 16:20:39.487993] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.703 [2024-07-15 16:20:39.488000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.703 [2024-07-15 16:20:39.491509] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.703 [2024-07-15 16:20:39.500782] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.703 [2024-07-15 16:20:39.501513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.703 [2024-07-15 16:20:39.501550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.703 [2024-07-15 16:20:39.501561] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.703 [2024-07-15 16:20:39.501797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.703 [2024-07-15 16:20:39.502016] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.703 [2024-07-15 16:20:39.502025] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.703 [2024-07-15 16:20:39.502032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.703 [2024-07-15 16:20:39.505538] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.703 [2024-07-15 16:20:39.514603] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.703 [2024-07-15 16:20:39.515354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.703 [2024-07-15 16:20:39.515390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.703 [2024-07-15 16:20:39.515406] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.703 [2024-07-15 16:20:39.515642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.703 [2024-07-15 16:20:39.515861] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.703 [2024-07-15 16:20:39.515870] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.703 [2024-07-15 16:20:39.515877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.703 [2024-07-15 16:20:39.519386] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.965 [2024-07-15 16:20:39.528465] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.965 [2024-07-15 16:20:39.529149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 16:20:39.529168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.965 [2024-07-15 16:20:39.529176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.965 [2024-07-15 16:20:39.529393] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.965 [2024-07-15 16:20:39.529608] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.965 [2024-07-15 16:20:39.529616] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.965 [2024-07-15 16:20:39.529623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.965 [2024-07-15 16:20:39.533118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.965 [2024-07-15 16:20:39.542401] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.965 [2024-07-15 16:20:39.543018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 16:20:39.543034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.965 [2024-07-15 16:20:39.543042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.965 [2024-07-15 16:20:39.543263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.965 [2024-07-15 16:20:39.543479] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.965 [2024-07-15 16:20:39.543494] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.965 [2024-07-15 16:20:39.543501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.965 [2024-07-15 16:20:39.546991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.965 [2024-07-15 16:20:39.556258] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.965 [2024-07-15 16:20:39.556881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 16:20:39.556896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.965 [2024-07-15 16:20:39.556903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.965 [2024-07-15 16:20:39.557118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.965 [2024-07-15 16:20:39.557344] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.965 [2024-07-15 16:20:39.557353] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.965 [2024-07-15 16:20:39.557360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.965 [2024-07-15 16:20:39.560853] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.965 [2024-07-15 16:20:39.570129] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.965 [2024-07-15 16:20:39.570758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 16:20:39.570773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.965 [2024-07-15 16:20:39.570780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.965 [2024-07-15 16:20:39.570996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.965 [2024-07-15 16:20:39.571216] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.965 [2024-07-15 16:20:39.571224] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.965 [2024-07-15 16:20:39.571231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.965 [2024-07-15 16:20:39.574716] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.965 [2024-07-15 16:20:39.583976] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.965 [2024-07-15 16:20:39.584492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 16:20:39.584507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.965 [2024-07-15 16:20:39.584514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.965 [2024-07-15 16:20:39.584729] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.965 [2024-07-15 16:20:39.584945] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.965 [2024-07-15 16:20:39.584953] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.965 [2024-07-15 16:20:39.584960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.965 [2024-07-15 16:20:39.588451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.965 [2024-07-15 16:20:39.597723] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.965 [2024-07-15 16:20:39.598465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 16:20:39.598502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.965 [2024-07-15 16:20:39.598513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.965 [2024-07-15 16:20:39.598749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.965 [2024-07-15 16:20:39.598968] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.965 [2024-07-15 16:20:39.598977] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.965 [2024-07-15 16:20:39.598984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.965 [2024-07-15 16:20:39.602493] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.965 [2024-07-15 16:20:39.611571] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.965 [2024-07-15 16:20:39.612342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 16:20:39.612379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.965 [2024-07-15 16:20:39.612390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.965 [2024-07-15 16:20:39.612626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.965 [2024-07-15 16:20:39.612845] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.965 [2024-07-15 16:20:39.612854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.965 [2024-07-15 16:20:39.612862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.965 [2024-07-15 16:20:39.616370] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.965 [2024-07-15 16:20:39.625445] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.965 [2024-07-15 16:20:39.626083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 16:20:39.626101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.965 [2024-07-15 16:20:39.626108] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.965 [2024-07-15 16:20:39.626332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.965 [2024-07-15 16:20:39.626548] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.965 [2024-07-15 16:20:39.626556] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.965 [2024-07-15 16:20:39.626563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.965 [2024-07-15 16:20:39.630051] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.965 [2024-07-15 16:20:39.639236] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.965 [2024-07-15 16:20:39.639940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 16:20:39.639977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.965 [2024-07-15 16:20:39.639988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.965 [2024-07-15 16:20:39.640232] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.965 [2024-07-15 16:20:39.640452] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.965 [2024-07-15 16:20:39.640460] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.965 [2024-07-15 16:20:39.640469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.965 [2024-07-15 16:20:39.643967] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.965 [2024-07-15 16:20:39.653035] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.965 [2024-07-15 16:20:39.653711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 16:20:39.653729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.965 [2024-07-15 16:20:39.653741] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.965 [2024-07-15 16:20:39.653958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.965 [2024-07-15 16:20:39.654179] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.965 [2024-07-15 16:20:39.654188] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.965 [2024-07-15 16:20:39.654195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.965 [2024-07-15 16:20:39.657688] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.965 [2024-07-15 16:20:39.666969] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.965 [2024-07-15 16:20:39.667510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 16:20:39.667529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.965 [2024-07-15 16:20:39.667537] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.965 [2024-07-15 16:20:39.667753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.965 [2024-07-15 16:20:39.667969] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.965 [2024-07-15 16:20:39.667977] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.965 [2024-07-15 16:20:39.667985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.965 [2024-07-15 16:20:39.671484] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.965 [2024-07-15 16:20:39.680755] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.965 [2024-07-15 16:20:39.681454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 16:20:39.681491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.965 [2024-07-15 16:20:39.681501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.965 [2024-07-15 16:20:39.681737] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.965 [2024-07-15 16:20:39.681957] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.965 [2024-07-15 16:20:39.681965] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.965 [2024-07-15 16:20:39.681973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.965 [2024-07-15 16:20:39.685475] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.965 [2024-07-15 16:20:39.694533] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.965 [2024-07-15 16:20:39.695209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 16:20:39.695246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.965 [2024-07-15 16:20:39.695256] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.965 [2024-07-15 16:20:39.695492] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.965 [2024-07-15 16:20:39.695711] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.965 [2024-07-15 16:20:39.695724] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.965 [2024-07-15 16:20:39.695732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.965 [2024-07-15 16:20:39.699239] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.965 [2024-07-15 16:20:39.708310] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.965 [2024-07-15 16:20:39.708935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 16:20:39.708953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.965 [2024-07-15 16:20:39.708960] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.965 [2024-07-15 16:20:39.709183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.965 [2024-07-15 16:20:39.709399] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.965 [2024-07-15 16:20:39.709407] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.965 [2024-07-15 16:20:39.709414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.965 [2024-07-15 16:20:39.712905] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.965 [2024-07-15 16:20:39.722178] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.966 [2024-07-15 16:20:39.722875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.966 [2024-07-15 16:20:39.722912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.966 [2024-07-15 16:20:39.722923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.966 [2024-07-15 16:20:39.723169] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.966 [2024-07-15 16:20:39.723389] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.966 [2024-07-15 16:20:39.723397] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.966 [2024-07-15 16:20:39.723404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.966 [2024-07-15 16:20:39.726902] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.966 [2024-07-15 16:20:39.735983] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.966 [2024-07-15 16:20:39.736744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.966 [2024-07-15 16:20:39.736781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.966 [2024-07-15 16:20:39.736793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.966 [2024-07-15 16:20:39.737030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.966 [2024-07-15 16:20:39.737256] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.966 [2024-07-15 16:20:39.737266] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.966 [2024-07-15 16:20:39.737274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.966 [2024-07-15 16:20:39.740771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.966 [2024-07-15 16:20:39.749834] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.966 [2024-07-15 16:20:39.750464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.966 [2024-07-15 16:20:39.750483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.966 [2024-07-15 16:20:39.750491] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.966 [2024-07-15 16:20:39.750707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.966 [2024-07-15 16:20:39.750923] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.966 [2024-07-15 16:20:39.750930] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.966 [2024-07-15 16:20:39.750937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.966 [2024-07-15 16:20:39.754432] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.966 [2024-07-15 16:20:39.763696] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.966 [2024-07-15 16:20:39.764434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.966 [2024-07-15 16:20:39.764470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.966 [2024-07-15 16:20:39.764480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.966 [2024-07-15 16:20:39.764716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.966 [2024-07-15 16:20:39.764935] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.966 [2024-07-15 16:20:39.764943] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.966 [2024-07-15 16:20:39.764951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.966 [2024-07-15 16:20:39.768460] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.966 [2024-07-15 16:20:39.777525] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.966 [2024-07-15 16:20:39.778289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.966 [2024-07-15 16:20:39.778326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.966 [2024-07-15 16:20:39.778338] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.966 [2024-07-15 16:20:39.778577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.966 [2024-07-15 16:20:39.778797] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.966 [2024-07-15 16:20:39.778805] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.966 [2024-07-15 16:20:39.778812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.966 [2024-07-15 16:20:39.782315] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.966 [2024-07-15 16:20:39.791364] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.966 [2024-07-15 16:20:39.792094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.966 [2024-07-15 16:20:39.792137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:03.966 [2024-07-15 16:20:39.792148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:03.966 [2024-07-15 16:20:39.792388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:03.966 [2024-07-15 16:20:39.792608] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.966 [2024-07-15 16:20:39.792617] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.966 [2024-07-15 16:20:39.792624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.966 [2024-07-15 16:20:39.796116] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.966 [2024-07-15 16:20:39.805180] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.227 [2024-07-15 16:20:39.805889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.227 [2024-07-15 16:20:39.805926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.227 [2024-07-15 16:20:39.805937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.227 [2024-07-15 16:20:39.806183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.227 [2024-07-15 16:20:39.806403] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.227 [2024-07-15 16:20:39.806411] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.227 [2024-07-15 16:20:39.806419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.227 [2024-07-15 16:20:39.809911] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.227 [2024-07-15 16:20:39.818999] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.227 [2024-07-15 16:20:39.819719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.227 [2024-07-15 16:20:39.819755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.227 [2024-07-15 16:20:39.819766] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.227 [2024-07-15 16:20:39.820003] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.227 [2024-07-15 16:20:39.820229] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.227 [2024-07-15 16:20:39.820238] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.227 [2024-07-15 16:20:39.820246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.227 [2024-07-15 16:20:39.823742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.227 [2024-07-15 16:20:39.832804] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.227 [2024-07-15 16:20:39.833353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.227 [2024-07-15 16:20:39.833372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.227 [2024-07-15 16:20:39.833380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.227 [2024-07-15 16:20:39.833596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.227 [2024-07-15 16:20:39.833812] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.227 [2024-07-15 16:20:39.833820] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.227 [2024-07-15 16:20:39.833831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.227 [2024-07-15 16:20:39.837340] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.227 [2024-07-15 16:20:39.846614] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.227 [2024-07-15 16:20:39.847417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.227 [2024-07-15 16:20:39.847454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.227 [2024-07-15 16:20:39.847464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.227 [2024-07-15 16:20:39.847701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.227 [2024-07-15 16:20:39.847920] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.227 [2024-07-15 16:20:39.847928] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.227 [2024-07-15 16:20:39.847936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.227 [2024-07-15 16:20:39.851433] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.227 [2024-07-15 16:20:39.860488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.227 [2024-07-15 16:20:39.861170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.227 [2024-07-15 16:20:39.861195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.227 [2024-07-15 16:20:39.861204] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.227 [2024-07-15 16:20:39.861426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.227 [2024-07-15 16:20:39.861642] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.227 [2024-07-15 16:20:39.861650] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.227 [2024-07-15 16:20:39.861657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.227 [2024-07-15 16:20:39.865150] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.227 [2024-07-15 16:20:39.874419] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.227 [2024-07-15 16:20:39.875137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.227 [2024-07-15 16:20:39.875173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.227 [2024-07-15 16:20:39.875185] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.227 [2024-07-15 16:20:39.875424] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.227 [2024-07-15 16:20:39.875643] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.227 [2024-07-15 16:20:39.875651] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.227 [2024-07-15 16:20:39.875659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.227 [2024-07-15 16:20:39.879156] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.227 [2024-07-15 16:20:39.888210] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.227 [2024-07-15 16:20:39.888997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.227 [2024-07-15 16:20:39.889038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.227 [2024-07-15 16:20:39.889049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.227 [2024-07-15 16:20:39.889294] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.227 [2024-07-15 16:20:39.889514] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.227 [2024-07-15 16:20:39.889522] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.227 [2024-07-15 16:20:39.889529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.227 [2024-07-15 16:20:39.893019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.227 [2024-07-15 16:20:39.902079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.227 [2024-07-15 16:20:39.902788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.227 [2024-07-15 16:20:39.902824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.227 [2024-07-15 16:20:39.902836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.227 [2024-07-15 16:20:39.903073] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.227 [2024-07-15 16:20:39.903301] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.227 [2024-07-15 16:20:39.903310] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.227 [2024-07-15 16:20:39.903317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.227 [2024-07-15 16:20:39.906809] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.227 [2024-07-15 16:20:39.915875] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.227 [2024-07-15 16:20:39.916632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.227 [2024-07-15 16:20:39.916669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.227 [2024-07-15 16:20:39.916680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.227 [2024-07-15 16:20:39.916916] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.227 [2024-07-15 16:20:39.917144] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.227 [2024-07-15 16:20:39.917153] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.227 [2024-07-15 16:20:39.917161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.227 [2024-07-15 16:20:39.920652] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.227 [2024-07-15 16:20:39.929714] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.227 [2024-07-15 16:20:39.930458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.227 [2024-07-15 16:20:39.930494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.227 [2024-07-15 16:20:39.930505] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.227 [2024-07-15 16:20:39.930741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.227 [2024-07-15 16:20:39.930964] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.227 [2024-07-15 16:20:39.930973] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.227 [2024-07-15 16:20:39.930980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.227 [2024-07-15 16:20:39.934479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.227 [2024-07-15 16:20:39.943542] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.227 [2024-07-15 16:20:39.944098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.227 [2024-07-15 16:20:39.944116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.227 [2024-07-15 16:20:39.944130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.227 [2024-07-15 16:20:39.944347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.227 [2024-07-15 16:20:39.944563] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.227 [2024-07-15 16:20:39.944571] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.227 [2024-07-15 16:20:39.944578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.227 [2024-07-15 16:20:39.948061] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.227 [2024-07-15 16:20:39.957315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.227 [2024-07-15 16:20:39.957965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.227 [2024-07-15 16:20:39.957980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.228 [2024-07-15 16:20:39.957988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.228 [2024-07-15 16:20:39.958209] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.228 [2024-07-15 16:20:39.958425] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.228 [2024-07-15 16:20:39.958432] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.228 [2024-07-15 16:20:39.958439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.228 [2024-07-15 16:20:39.961922] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.228 [2024-07-15 16:20:39.971181] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.228 [2024-07-15 16:20:39.971880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.228 [2024-07-15 16:20:39.971916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.228 [2024-07-15 16:20:39.971927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.228 [2024-07-15 16:20:39.972173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.228 [2024-07-15 16:20:39.972393] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.228 [2024-07-15 16:20:39.972401] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.228 [2024-07-15 16:20:39.972409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.228 [2024-07-15 16:20:39.975913] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.228 [2024-07-15 16:20:39.984976] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.228 [2024-07-15 16:20:39.985526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.228 [2024-07-15 16:20:39.985544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.228 [2024-07-15 16:20:39.985552] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.228 [2024-07-15 16:20:39.985768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.228 [2024-07-15 16:20:39.985984] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.228 [2024-07-15 16:20:39.985991] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.228 [2024-07-15 16:20:39.985998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.228 [2024-07-15 16:20:39.989492] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.228 [2024-07-15 16:20:39.998743] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.228 [2024-07-15 16:20:39.999355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.228 [2024-07-15 16:20:39.999371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.228 [2024-07-15 16:20:39.999378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.228 [2024-07-15 16:20:39.999594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.228 [2024-07-15 16:20:39.999810] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.228 [2024-07-15 16:20:39.999817] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.228 [2024-07-15 16:20:39.999824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.228 [2024-07-15 16:20:40.003814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.228 [2024-07-15 16:20:40.012671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.228 [2024-07-15 16:20:40.013412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.228 [2024-07-15 16:20:40.013449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.228 [2024-07-15 16:20:40.013460] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.228 [2024-07-15 16:20:40.013697] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.228 [2024-07-15 16:20:40.013916] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.228 [2024-07-15 16:20:40.013924] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.228 [2024-07-15 16:20:40.013932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.228 [2024-07-15 16:20:40.017427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.228 [2024-07-15 16:20:40.026498] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.228 [2024-07-15 16:20:40.027225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.228 [2024-07-15 16:20:40.027262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.228 [2024-07-15 16:20:40.027283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.228 [2024-07-15 16:20:40.027521] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.228 [2024-07-15 16:20:40.027740] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.228 [2024-07-15 16:20:40.027748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.228 [2024-07-15 16:20:40.027756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.228 [2024-07-15 16:20:40.031259] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.228 [2024-07-15 16:20:40.040323] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.228 [2024-07-15 16:20:40.041068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.228 [2024-07-15 16:20:40.041104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.228 [2024-07-15 16:20:40.041117] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.228 [2024-07-15 16:20:40.041365] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.228 [2024-07-15 16:20:40.041585] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.228 [2024-07-15 16:20:40.041593] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.228 [2024-07-15 16:20:40.041600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.228 [2024-07-15 16:20:40.045092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.228 [2024-07-15 16:20:40.054147] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.228 [2024-07-15 16:20:40.054884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.228 [2024-07-15 16:20:40.054920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.228 [2024-07-15 16:20:40.054931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.228 [2024-07-15 16:20:40.055175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.228 [2024-07-15 16:20:40.055395] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.228 [2024-07-15 16:20:40.055404] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.228 [2024-07-15 16:20:40.055412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.228 [2024-07-15 16:20:40.058905] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.488 [2024-07-15 16:20:40.068191] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.488 [2024-07-15 16:20:40.068897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.488 [2024-07-15 16:20:40.068933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.488 [2024-07-15 16:20:40.068944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.489 [2024-07-15 16:20:40.069187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.489 [2024-07-15 16:20:40.069407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.489 [2024-07-15 16:20:40.069421] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.489 [2024-07-15 16:20:40.069429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.489 [2024-07-15 16:20:40.072921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.489 [2024-07-15 16:20:40.081982] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.489 [2024-07-15 16:20:40.082697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.489 [2024-07-15 16:20:40.082734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.489 [2024-07-15 16:20:40.082746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.489 [2024-07-15 16:20:40.082981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.489 [2024-07-15 16:20:40.083209] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.489 [2024-07-15 16:20:40.083219] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.489 [2024-07-15 16:20:40.083227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.489 [2024-07-15 16:20:40.086721] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.489 [2024-07-15 16:20:40.095777] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.489 [2024-07-15 16:20:40.096497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.489 [2024-07-15 16:20:40.096534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.489 [2024-07-15 16:20:40.096544] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.489 [2024-07-15 16:20:40.096781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.489 [2024-07-15 16:20:40.097000] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.489 [2024-07-15 16:20:40.097008] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.489 [2024-07-15 16:20:40.097016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.489 [2024-07-15 16:20:40.100516] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.489 [2024-07-15 16:20:40.109576] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.489 [2024-07-15 16:20:40.110265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.489 [2024-07-15 16:20:40.110302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.489 [2024-07-15 16:20:40.110313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.489 [2024-07-15 16:20:40.110549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.489 [2024-07-15 16:20:40.110767] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.489 [2024-07-15 16:20:40.110776] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.489 [2024-07-15 16:20:40.110783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.489 [2024-07-15 16:20:40.114283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.489 [2024-07-15 16:20:40.123339] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.489 [2024-07-15 16:20:40.123847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.489 [2024-07-15 16:20:40.123869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.489 [2024-07-15 16:20:40.123877] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.489 [2024-07-15 16:20:40.124095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.489 [2024-07-15 16:20:40.124323] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.489 [2024-07-15 16:20:40.124333] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.489 [2024-07-15 16:20:40.124340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.489 [2024-07-15 16:20:40.127827] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.489 [2024-07-15 16:20:40.137099] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.489 [2024-07-15 16:20:40.137851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.489 [2024-07-15 16:20:40.137887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.489 [2024-07-15 16:20:40.137898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.489 [2024-07-15 16:20:40.138143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.489 [2024-07-15 16:20:40.138363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.489 [2024-07-15 16:20:40.138371] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.489 [2024-07-15 16:20:40.138378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.489 [2024-07-15 16:20:40.141874] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.489 [2024-07-15 16:20:40.150932] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.489 [2024-07-15 16:20:40.151691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.489 [2024-07-15 16:20:40.151728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.489 [2024-07-15 16:20:40.151738] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.489 [2024-07-15 16:20:40.151975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.489 [2024-07-15 16:20:40.152204] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.489 [2024-07-15 16:20:40.152213] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.489 [2024-07-15 16:20:40.152220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.489 [2024-07-15 16:20:40.155715] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.489 [2024-07-15 16:20:40.164763] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.489 [2024-07-15 16:20:40.165521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.489 [2024-07-15 16:20:40.165557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.489 [2024-07-15 16:20:40.165568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.489 [2024-07-15 16:20:40.165808] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.489 [2024-07-15 16:20:40.166028] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.489 [2024-07-15 16:20:40.166036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.489 [2024-07-15 16:20:40.166044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.489 [2024-07-15 16:20:40.169551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.489 [2024-07-15 16:20:40.178603] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.489 [2024-07-15 16:20:40.179351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.489 [2024-07-15 16:20:40.179388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.489 [2024-07-15 16:20:40.179398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.489 [2024-07-15 16:20:40.179634] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.489 [2024-07-15 16:20:40.179854] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.489 [2024-07-15 16:20:40.179862] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.489 [2024-07-15 16:20:40.179870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.489 [2024-07-15 16:20:40.183377] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.489 [2024-07-15 16:20:40.192440] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.489 [2024-07-15 16:20:40.193084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.489 [2024-07-15 16:20:40.193121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.489 [2024-07-15 16:20:40.193142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.489 [2024-07-15 16:20:40.193381] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.489 [2024-07-15 16:20:40.193600] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.489 [2024-07-15 16:20:40.193608] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.489 [2024-07-15 16:20:40.193616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.489 [2024-07-15 16:20:40.197106] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.489 [2024-07-15 16:20:40.206192] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.489 [2024-07-15 16:20:40.206900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.489 [2024-07-15 16:20:40.206937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.489 [2024-07-15 16:20:40.206949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.489 [2024-07-15 16:20:40.207195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.489 [2024-07-15 16:20:40.207415] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.489 [2024-07-15 16:20:40.207424] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.489 [2024-07-15 16:20:40.207436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.489 [2024-07-15 16:20:40.210932] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.489 [2024-07-15 16:20:40.219993] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.489 [2024-07-15 16:20:40.220709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.489 [2024-07-15 16:20:40.220746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.489 [2024-07-15 16:20:40.220756] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.489 [2024-07-15 16:20:40.220993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.489 [2024-07-15 16:20:40.221222] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.489 [2024-07-15 16:20:40.221231] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.489 [2024-07-15 16:20:40.221239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.489 [2024-07-15 16:20:40.224733] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.489 [2024-07-15 16:20:40.233784] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.489 [2024-07-15 16:20:40.234503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.489 [2024-07-15 16:20:40.234540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.489 [2024-07-15 16:20:40.234551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.489 [2024-07-15 16:20:40.234787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.489 [2024-07-15 16:20:40.235006] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.489 [2024-07-15 16:20:40.235015] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.489 [2024-07-15 16:20:40.235022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.489 [2024-07-15 16:20:40.238531] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.489 [2024-07-15 16:20:40.247584] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.489 [2024-07-15 16:20:40.248241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.489 [2024-07-15 16:20:40.248260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.489 [2024-07-15 16:20:40.248268] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.489 [2024-07-15 16:20:40.248484] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.489 [2024-07-15 16:20:40.248699] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.489 [2024-07-15 16:20:40.248708] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.489 [2024-07-15 16:20:40.248715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.489 [2024-07-15 16:20:40.252205] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.489 [2024-07-15 16:20:40.261453] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.489 [2024-07-15 16:20:40.262110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.489 [2024-07-15 16:20:40.262153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.489 [2024-07-15 16:20:40.262165] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.489 [2024-07-15 16:20:40.262401] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.489 [2024-07-15 16:20:40.262620] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.489 [2024-07-15 16:20:40.262628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.489 [2024-07-15 16:20:40.262635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.489 [2024-07-15 16:20:40.266133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.489 [2024-07-15 16:20:40.275192] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.489 [2024-07-15 16:20:40.275906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.489 [2024-07-15 16:20:40.275943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.490 [2024-07-15 16:20:40.275953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.490 [2024-07-15 16:20:40.276198] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.490 [2024-07-15 16:20:40.276418] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.490 [2024-07-15 16:20:40.276426] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.490 [2024-07-15 16:20:40.276434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.490 [2024-07-15 16:20:40.279925] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.490 [2024-07-15 16:20:40.288983] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.490 [2024-07-15 16:20:40.289582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.490 [2024-07-15 16:20:40.289618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.490 [2024-07-15 16:20:40.289629] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.490 [2024-07-15 16:20:40.289865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.490 [2024-07-15 16:20:40.290084] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.490 [2024-07-15 16:20:40.290093] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.490 [2024-07-15 16:20:40.290101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.490 [2024-07-15 16:20:40.293601] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.490 [2024-07-15 16:20:40.302855] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.490 [2024-07-15 16:20:40.303543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.490 [2024-07-15 16:20:40.303579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.490 [2024-07-15 16:20:40.303589] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.490 [2024-07-15 16:20:40.303830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.490 [2024-07-15 16:20:40.304049] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.490 [2024-07-15 16:20:40.304058] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.490 [2024-07-15 16:20:40.304065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.490 [2024-07-15 16:20:40.307563] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.490 [2024-07-15 16:20:40.316617] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.490 [2024-07-15 16:20:40.317316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.490 [2024-07-15 16:20:40.317352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.490 [2024-07-15 16:20:40.317363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.490 [2024-07-15 16:20:40.317599] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.490 [2024-07-15 16:20:40.317817] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.490 [2024-07-15 16:20:40.317825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.490 [2024-07-15 16:20:40.317833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.490 [2024-07-15 16:20:40.321335] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.752 [2024-07-15 16:20:40.330399] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.752 [2024-07-15 16:20:40.330969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.752 [2024-07-15 16:20:40.331004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.752 [2024-07-15 16:20:40.331016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.752 [2024-07-15 16:20:40.331262] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.752 [2024-07-15 16:20:40.331483] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.752 [2024-07-15 16:20:40.331492] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.752 [2024-07-15 16:20:40.331500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.752 [2024-07-15 16:20:40.334996] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.752 [2024-07-15 16:20:40.344272] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.752 [2024-07-15 16:20:40.344942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.752 [2024-07-15 16:20:40.344978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.752 [2024-07-15 16:20:40.344989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.752 [2024-07-15 16:20:40.345235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.752 [2024-07-15 16:20:40.345455] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.752 [2024-07-15 16:20:40.345463] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.752 [2024-07-15 16:20:40.345475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.752 [2024-07-15 16:20:40.348968] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.752 [2024-07-15 16:20:40.358026] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.752 [2024-07-15 16:20:40.358750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.752 [2024-07-15 16:20:40.358787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.752 [2024-07-15 16:20:40.358797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.752 [2024-07-15 16:20:40.359033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.752 [2024-07-15 16:20:40.359262] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.752 [2024-07-15 16:20:40.359271] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.752 [2024-07-15 16:20:40.359278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.752 [2024-07-15 16:20:40.362770] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.752 [2024-07-15 16:20:40.371825] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.752 [2024-07-15 16:20:40.372578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.752 [2024-07-15 16:20:40.372615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.752 [2024-07-15 16:20:40.372625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.752 [2024-07-15 16:20:40.372861] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.752 [2024-07-15 16:20:40.373080] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.752 [2024-07-15 16:20:40.373088] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.752 [2024-07-15 16:20:40.373096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.752 [2024-07-15 16:20:40.376599] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.752 [2024-07-15 16:20:40.385663] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.752 [2024-07-15 16:20:40.386402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.752 [2024-07-15 16:20:40.386439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.752 [2024-07-15 16:20:40.386450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.752 [2024-07-15 16:20:40.386685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.752 [2024-07-15 16:20:40.386904] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.752 [2024-07-15 16:20:40.386913] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.752 [2024-07-15 16:20:40.386920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.752 [2024-07-15 16:20:40.390424] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.752 [2024-07-15 16:20:40.399487] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.752 [2024-07-15 16:20:40.400242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.752 [2024-07-15 16:20:40.400283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.752 [2024-07-15 16:20:40.400296] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.752 [2024-07-15 16:20:40.400533] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.752 [2024-07-15 16:20:40.400752] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.752 [2024-07-15 16:20:40.400760] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.752 [2024-07-15 16:20:40.400767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.752 [2024-07-15 16:20:40.404271] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.752 [2024-07-15 16:20:40.413323] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.752 [2024-07-15 16:20:40.413961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.752 [2024-07-15 16:20:40.413998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.752 [2024-07-15 16:20:40.414008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.752 [2024-07-15 16:20:40.414253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.752 [2024-07-15 16:20:40.414474] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.752 [2024-07-15 16:20:40.414482] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.752 [2024-07-15 16:20:40.414489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.752 [2024-07-15 16:20:40.417982] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.752 [2024-07-15 16:20:40.427246] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.752 [2024-07-15 16:20:40.428004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.752 [2024-07-15 16:20:40.428040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.752 [2024-07-15 16:20:40.428050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.752 [2024-07-15 16:20:40.428295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.752 [2024-07-15 16:20:40.428515] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.752 [2024-07-15 16:20:40.428524] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.752 [2024-07-15 16:20:40.428531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.752 [2024-07-15 16:20:40.432021] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.752 [2024-07-15 16:20:40.441083] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.752 [2024-07-15 16:20:40.441751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.752 [2024-07-15 16:20:40.441769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.752 [2024-07-15 16:20:40.441777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.752 [2024-07-15 16:20:40.441994] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.752 [2024-07-15 16:20:40.442221] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.752 [2024-07-15 16:20:40.442230] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.753 [2024-07-15 16:20:40.442237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.753 [2024-07-15 16:20:40.445725] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.753 [2024-07-15 16:20:40.454977] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.753 [2024-07-15 16:20:40.455764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.753 [2024-07-15 16:20:40.455801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.753 [2024-07-15 16:20:40.455812] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.753 [2024-07-15 16:20:40.456047] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.753 [2024-07-15 16:20:40.456277] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.753 [2024-07-15 16:20:40.456286] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.753 [2024-07-15 16:20:40.456293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.753 [2024-07-15 16:20:40.459785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.753 [2024-07-15 16:20:40.468850] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.753 [2024-07-15 16:20:40.469574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.753 [2024-07-15 16:20:40.469611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.753 [2024-07-15 16:20:40.469621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.753 [2024-07-15 16:20:40.469857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.753 [2024-07-15 16:20:40.470077] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.753 [2024-07-15 16:20:40.470085] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.753 [2024-07-15 16:20:40.470092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.753 [2024-07-15 16:20:40.473594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.753 [2024-07-15 16:20:40.482644] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.753 [2024-07-15 16:20:40.483421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.753 [2024-07-15 16:20:40.483458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.753 [2024-07-15 16:20:40.483469] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.753 [2024-07-15 16:20:40.483705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.753 [2024-07-15 16:20:40.483924] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.753 [2024-07-15 16:20:40.483933] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.753 [2024-07-15 16:20:40.483940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.753 [2024-07-15 16:20:40.487443] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.753 [2024-07-15 16:20:40.496513] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.753 [2024-07-15 16:20:40.497225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.753 [2024-07-15 16:20:40.497262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.753 [2024-07-15 16:20:40.497274] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.753 [2024-07-15 16:20:40.497513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.753 [2024-07-15 16:20:40.497732] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.753 [2024-07-15 16:20:40.497741] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.753 [2024-07-15 16:20:40.497749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.753 [2024-07-15 16:20:40.501253] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.753 [2024-07-15 16:20:40.510320] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.753 [2024-07-15 16:20:40.511025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.753 [2024-07-15 16:20:40.511062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.753 [2024-07-15 16:20:40.511074] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.753 [2024-07-15 16:20:40.511321] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.753 [2024-07-15 16:20:40.511542] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.753 [2024-07-15 16:20:40.511550] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.753 [2024-07-15 16:20:40.511558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.753 [2024-07-15 16:20:40.515050] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.753 [2024-07-15 16:20:40.524113] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.753 [2024-07-15 16:20:40.524746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.753 [2024-07-15 16:20:40.524764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.753 [2024-07-15 16:20:40.524772] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.753 [2024-07-15 16:20:40.524988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.753 [2024-07-15 16:20:40.525211] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.753 [2024-07-15 16:20:40.525220] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.753 [2024-07-15 16:20:40.525227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.753 [2024-07-15 16:20:40.528719] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.753 [2024-07-15 16:20:40.537991] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.753 [2024-07-15 16:20:40.538512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.753 [2024-07-15 16:20:40.538528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.753 [2024-07-15 16:20:40.538540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.753 [2024-07-15 16:20:40.538756] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.753 [2024-07-15 16:20:40.538971] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.753 [2024-07-15 16:20:40.538978] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.753 [2024-07-15 16:20:40.538985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.753 [2024-07-15 16:20:40.542481] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.753 [2024-07-15 16:20:40.551746] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.753 [2024-07-15 16:20:40.552330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.753 [2024-07-15 16:20:40.552346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.753 [2024-07-15 16:20:40.552353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.753 [2024-07-15 16:20:40.552569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.753 [2024-07-15 16:20:40.552784] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.753 [2024-07-15 16:20:40.552791] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.753 [2024-07-15 16:20:40.552798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.753 [2024-07-15 16:20:40.556290] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.753 [2024-07-15 16:20:40.565553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.753 [2024-07-15 16:20:40.566165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.753 [2024-07-15 16:20:40.566180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.753 [2024-07-15 16:20:40.566187] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.753 [2024-07-15 16:20:40.566402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.753 [2024-07-15 16:20:40.566618] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.753 [2024-07-15 16:20:40.566625] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.753 [2024-07-15 16:20:40.566632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.753 [2024-07-15 16:20:40.570132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.753 [2024-07-15 16:20:40.579394] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.753 [2024-07-15 16:20:40.580165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.753 [2024-07-15 16:20:40.580202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:04.753 [2024-07-15 16:20:40.580213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:04.753 [2024-07-15 16:20:40.580450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:04.753 [2024-07-15 16:20:40.580669] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.753 [2024-07-15 16:20:40.580681] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.753 [2024-07-15 16:20:40.580689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.753 [2024-07-15 16:20:40.584194] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.014 [2024-07-15 16:20:40.593250] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.014 [2024-07-15 16:20:40.593917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.014 [2024-07-15 16:20:40.593953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.014 [2024-07-15 16:20:40.593963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.014 [2024-07-15 16:20:40.594206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.014 [2024-07-15 16:20:40.594427] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.014 [2024-07-15 16:20:40.594435] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.014 [2024-07-15 16:20:40.594442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.014 [2024-07-15 16:20:40.597933] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.014 [2024-07-15 16:20:40.606996] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.014 [2024-07-15 16:20:40.607715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.014 [2024-07-15 16:20:40.607752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.014 [2024-07-15 16:20:40.607763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.014 [2024-07-15 16:20:40.607999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.014 [2024-07-15 16:20:40.608225] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.014 [2024-07-15 16:20:40.608235] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.014 [2024-07-15 16:20:40.608242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.014 [2024-07-15 16:20:40.611733] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.014 [2024-07-15 16:20:40.620788] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.014 [2024-07-15 16:20:40.621496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.014 [2024-07-15 16:20:40.621532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.014 [2024-07-15 16:20:40.621543] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.014 [2024-07-15 16:20:40.621779] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.014 [2024-07-15 16:20:40.621998] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.014 [2024-07-15 16:20:40.622006] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.015 [2024-07-15 16:20:40.622014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.015 [2024-07-15 16:20:40.625514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.015 [2024-07-15 16:20:40.634573] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.015 [2024-07-15 16:20:40.635223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.015 [2024-07-15 16:20:40.635260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.015 [2024-07-15 16:20:40.635272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.015 [2024-07-15 16:20:40.635511] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.015 [2024-07-15 16:20:40.635730] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.015 [2024-07-15 16:20:40.635739] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.015 [2024-07-15 16:20:40.635746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.015 [2024-07-15 16:20:40.639257] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.015 [2024-07-15 16:20:40.648313] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.015 [2024-07-15 16:20:40.649044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.015 [2024-07-15 16:20:40.649081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.015 [2024-07-15 16:20:40.649093] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.015 [2024-07-15 16:20:40.649339] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.015 [2024-07-15 16:20:40.649559] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.015 [2024-07-15 16:20:40.649567] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.015 [2024-07-15 16:20:40.649575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.015 [2024-07-15 16:20:40.653066] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.015 [2024-07-15 16:20:40.662115] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.015 [2024-07-15 16:20:40.662772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.015 [2024-07-15 16:20:40.662791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.015 [2024-07-15 16:20:40.662798] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.015 [2024-07-15 16:20:40.663014] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.015 [2024-07-15 16:20:40.663235] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.015 [2024-07-15 16:20:40.663245] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.015 [2024-07-15 16:20:40.663252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.015 [2024-07-15 16:20:40.666738] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.015 [2024-07-15 16:20:40.675899] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.015 [2024-07-15 16:20:40.676568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.015 [2024-07-15 16:20:40.676586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.015 [2024-07-15 16:20:40.676593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.015 [2024-07-15 16:20:40.676814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.015 [2024-07-15 16:20:40.677030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.015 [2024-07-15 16:20:40.677039] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.015 [2024-07-15 16:20:40.677046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.015 [2024-07-15 16:20:40.680536] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.015 [2024-07-15 16:20:40.689796] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.015 [2024-07-15 16:20:40.690500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.015 [2024-07-15 16:20:40.690538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.015 [2024-07-15 16:20:40.690551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.015 [2024-07-15 16:20:40.690788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.015 [2024-07-15 16:20:40.691008] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.015 [2024-07-15 16:20:40.691018] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.015 [2024-07-15 16:20:40.691025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.015 [2024-07-15 16:20:40.694525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.015 [2024-07-15 16:20:40.703587] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.015 [2024-07-15 16:20:40.704433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.015 [2024-07-15 16:20:40.704471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.015 [2024-07-15 16:20:40.704481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.015 [2024-07-15 16:20:40.704718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.015 [2024-07-15 16:20:40.704938] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.015 [2024-07-15 16:20:40.704947] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.015 [2024-07-15 16:20:40.704955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.015 [2024-07-15 16:20:40.708454] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.015 [2024-07-15 16:20:40.717508] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.015 [2024-07-15 16:20:40.718160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.015 [2024-07-15 16:20:40.718198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.015 [2024-07-15 16:20:40.718210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.015 [2024-07-15 16:20:40.718448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.015 [2024-07-15 16:20:40.718668] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.015 [2024-07-15 16:20:40.718678] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.015 [2024-07-15 16:20:40.718693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.015 [2024-07-15 16:20:40.722196] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.015 [2024-07-15 16:20:40.731251] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.015 [2024-07-15 16:20:40.731837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.015 [2024-07-15 16:20:40.731874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.015 [2024-07-15 16:20:40.731885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.015 [2024-07-15 16:20:40.732130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.015 [2024-07-15 16:20:40.732351] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.015 [2024-07-15 16:20:40.732360] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.015 [2024-07-15 16:20:40.732368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.015 [2024-07-15 16:20:40.735861] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.015 [2024-07-15 16:20:40.745139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.015 [2024-07-15 16:20:40.745892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.015 [2024-07-15 16:20:40.745930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.015 [2024-07-15 16:20:40.745941] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.015 [2024-07-15 16:20:40.746185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.015 [2024-07-15 16:20:40.746406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.015 [2024-07-15 16:20:40.746416] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.015 [2024-07-15 16:20:40.746424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.015 [2024-07-15 16:20:40.749921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.015 [2024-07-15 16:20:40.758979] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.015 [2024-07-15 16:20:40.759638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.015 [2024-07-15 16:20:40.759657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.015 [2024-07-15 16:20:40.759665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.015 [2024-07-15 16:20:40.759881] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.015 [2024-07-15 16:20:40.760097] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.016 [2024-07-15 16:20:40.760106] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.016 [2024-07-15 16:20:40.760113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.016 [2024-07-15 16:20:40.763605] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.016 [2024-07-15 16:20:40.772875] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.016 [2024-07-15 16:20:40.773431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.016 [2024-07-15 16:20:40.773449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.016 [2024-07-15 16:20:40.773457] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.016 [2024-07-15 16:20:40.773673] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.016 [2024-07-15 16:20:40.773889] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.016 [2024-07-15 16:20:40.773899] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.016 [2024-07-15 16:20:40.773906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.016 [2024-07-15 16:20:40.777400] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.016 [2024-07-15 16:20:40.786660] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.016 [2024-07-15 16:20:40.787305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.016 [2024-07-15 16:20:40.787322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.016 [2024-07-15 16:20:40.787329] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.016 [2024-07-15 16:20:40.787545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.016 [2024-07-15 16:20:40.787762] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.016 [2024-07-15 16:20:40.787771] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.016 [2024-07-15 16:20:40.787777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.016 [2024-07-15 16:20:40.791267] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.016 [2024-07-15 16:20:40.800527] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.016 [2024-07-15 16:20:40.801161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.016 [2024-07-15 16:20:40.801178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.016 [2024-07-15 16:20:40.801186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.016 [2024-07-15 16:20:40.801401] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.016 [2024-07-15 16:20:40.801617] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.016 [2024-07-15 16:20:40.801627] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.016 [2024-07-15 16:20:40.801634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.016 [2024-07-15 16:20:40.805127] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.016 [2024-07-15 16:20:40.814392] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.016 [2024-07-15 16:20:40.814998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.016 [2024-07-15 16:20:40.815014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.016 [2024-07-15 16:20:40.815022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.016 [2024-07-15 16:20:40.815246] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.016 [2024-07-15 16:20:40.815463] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.016 [2024-07-15 16:20:40.815472] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.016 [2024-07-15 16:20:40.815480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.016 [2024-07-15 16:20:40.818963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.016 [2024-07-15 16:20:40.828220] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.016 [2024-07-15 16:20:40.828851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.016 [2024-07-15 16:20:40.828867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.016 [2024-07-15 16:20:40.828874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.016 [2024-07-15 16:20:40.829090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.016 [2024-07-15 16:20:40.829311] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.016 [2024-07-15 16:20:40.829320] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.016 [2024-07-15 16:20:40.829327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.016 [2024-07-15 16:20:40.832809] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.016 [2024-07-15 16:20:40.842092] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.016 [2024-07-15 16:20:40.842710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.016 [2024-07-15 16:20:40.842726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.016 [2024-07-15 16:20:40.842733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.016 [2024-07-15 16:20:40.842949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.016 [2024-07-15 16:20:40.843170] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.016 [2024-07-15 16:20:40.843180] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.016 [2024-07-15 16:20:40.843187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.016 [2024-07-15 16:20:40.846682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.278 [2024-07-15 16:20:40.855937] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.278 [2024-07-15 16:20:40.856297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.278 [2024-07-15 16:20:40.856317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.278 [2024-07-15 16:20:40.856325] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.278 [2024-07-15 16:20:40.856543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.278 [2024-07-15 16:20:40.856760] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.279 [2024-07-15 16:20:40.856769] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.279 [2024-07-15 16:20:40.856781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.279 [2024-07-15 16:20:40.860275] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.279 [2024-07-15 16:20:40.869754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.279 [2024-07-15 16:20:40.870246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.279 [2024-07-15 16:20:40.870264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.279 [2024-07-15 16:20:40.870271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.279 [2024-07-15 16:20:40.870488] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.279 [2024-07-15 16:20:40.870704] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.279 [2024-07-15 16:20:40.870713] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.279 [2024-07-15 16:20:40.870720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.279 [2024-07-15 16:20:40.874211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.279 [2024-07-15 16:20:40.883671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.279 [2024-07-15 16:20:40.884249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.279 [2024-07-15 16:20:40.884265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.279 [2024-07-15 16:20:40.884273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.279 [2024-07-15 16:20:40.884489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.279 [2024-07-15 16:20:40.884706] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.279 [2024-07-15 16:20:40.884714] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.279 [2024-07-15 16:20:40.884721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.279 [2024-07-15 16:20:40.888211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.279 [2024-07-15 16:20:40.897469] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.279 [2024-07-15 16:20:40.897975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.279 [2024-07-15 16:20:40.897991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.279 [2024-07-15 16:20:40.897998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.279 [2024-07-15 16:20:40.898219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.279 [2024-07-15 16:20:40.898436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.279 [2024-07-15 16:20:40.898444] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.279 [2024-07-15 16:20:40.898452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.279 [2024-07-15 16:20:40.901938] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.279 [2024-07-15 16:20:40.911401] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.279 [2024-07-15 16:20:40.912029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.279 [2024-07-15 16:20:40.912049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.279 [2024-07-15 16:20:40.912056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.279 [2024-07-15 16:20:40.912277] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.279 [2024-07-15 16:20:40.912494] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.279 [2024-07-15 16:20:40.912503] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.279 [2024-07-15 16:20:40.912510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.279 [2024-07-15 16:20:40.915996] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.279 [2024-07-15 16:20:40.925254] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.279 [2024-07-15 16:20:40.925890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.279 [2024-07-15 16:20:40.925905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.279 [2024-07-15 16:20:40.925913] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.279 [2024-07-15 16:20:40.926134] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.279 [2024-07-15 16:20:40.926351] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.279 [2024-07-15 16:20:40.926360] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.279 [2024-07-15 16:20:40.926367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.279 [2024-07-15 16:20:40.929853] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.279 [2024-07-15 16:20:40.939117] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.279 [2024-07-15 16:20:40.939727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.279 [2024-07-15 16:20:40.939742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.279 [2024-07-15 16:20:40.939750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.279 [2024-07-15 16:20:40.939965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.279 [2024-07-15 16:20:40.940187] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.279 [2024-07-15 16:20:40.940197] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.279 [2024-07-15 16:20:40.940205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.279 [2024-07-15 16:20:40.943693] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.279 [2024-07-15 16:20:40.952950] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.279 [2024-07-15 16:20:40.953602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.279 [2024-07-15 16:20:40.953618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.279 [2024-07-15 16:20:40.953626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.279 [2024-07-15 16:20:40.953842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.279 [2024-07-15 16:20:40.954062] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.279 [2024-07-15 16:20:40.954071] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.279 [2024-07-15 16:20:40.954078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.279 [2024-07-15 16:20:40.957575] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.279 [2024-07-15 16:20:40.966833] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.279 [2024-07-15 16:20:40.967446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.279 [2024-07-15 16:20:40.967463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.279 [2024-07-15 16:20:40.967470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.279 [2024-07-15 16:20:40.967686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.279 [2024-07-15 16:20:40.967904] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.279 [2024-07-15 16:20:40.967918] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.279 [2024-07-15 16:20:40.967925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.279 [2024-07-15 16:20:40.971417] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.279 [2024-07-15 16:20:40.980671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.279 [2024-07-15 16:20:40.981410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.279 [2024-07-15 16:20:40.981448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.279 [2024-07-15 16:20:40.981460] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.279 [2024-07-15 16:20:40.981697] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.279 [2024-07-15 16:20:40.981917] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.279 [2024-07-15 16:20:40.981926] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.279 [2024-07-15 16:20:40.981934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.279 [2024-07-15 16:20:40.985435] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.279 [2024-07-15 16:20:40.994495] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.279 [2024-07-15 16:20:40.995175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.279 [2024-07-15 16:20:40.995200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.279 [2024-07-15 16:20:40.995209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.279 [2024-07-15 16:20:40.995430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.279 [2024-07-15 16:20:40.995648] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.279 [2024-07-15 16:20:40.995657] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.279 [2024-07-15 16:20:40.995665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.279 [2024-07-15 16:20:40.999163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.279 [2024-07-15 16:20:41.008418] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.279 [2024-07-15 16:20:41.009035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.279 [2024-07-15 16:20:41.009052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.279 [2024-07-15 16:20:41.009060] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.279 [2024-07-15 16:20:41.009281] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.279 [2024-07-15 16:20:41.009498] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.279 [2024-07-15 16:20:41.009509] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.279 [2024-07-15 16:20:41.009516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.279 [2024-07-15 16:20:41.013002] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.279 [2024-07-15 16:20:41.022266] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.279 [2024-07-15 16:20:41.022797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.279 [2024-07-15 16:20:41.022813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.279 [2024-07-15 16:20:41.022821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.279 [2024-07-15 16:20:41.023037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.279 [2024-07-15 16:20:41.023259] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.279 [2024-07-15 16:20:41.023269] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.279 [2024-07-15 16:20:41.023276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.279 [2024-07-15 16:20:41.026764] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.279 [2024-07-15 16:20:41.036044] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.279 [2024-07-15 16:20:41.036795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.279 [2024-07-15 16:20:41.036832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.279 [2024-07-15 16:20:41.036843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.279 [2024-07-15 16:20:41.037081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.279 [2024-07-15 16:20:41.037308] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.279 [2024-07-15 16:20:41.037319] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.279 [2024-07-15 16:20:41.037326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.279 [2024-07-15 16:20:41.040818] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.279 [2024-07-15 16:20:41.049871] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.279 [2024-07-15 16:20:41.050500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.279 [2024-07-15 16:20:41.050519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.279 [2024-07-15 16:20:41.050532] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.279 [2024-07-15 16:20:41.050748] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.279 [2024-07-15 16:20:41.050965] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.279 [2024-07-15 16:20:41.050974] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.279 [2024-07-15 16:20:41.050982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.279 [2024-07-15 16:20:41.054470] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.279 [2024-07-15 16:20:41.063727] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.279 [2024-07-15 16:20:41.064278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.279 [2024-07-15 16:20:41.064296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.279 [2024-07-15 16:20:41.064303] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.279 [2024-07-15 16:20:41.064519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.279 [2024-07-15 16:20:41.064736] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.279 [2024-07-15 16:20:41.064744] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.279 [2024-07-15 16:20:41.064752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.279 [2024-07-15 16:20:41.068463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.279 [2024-07-15 16:20:41.077524] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.279 [2024-07-15 16:20:41.078017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.279 [2024-07-15 16:20:41.078034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.279 [2024-07-15 16:20:41.078042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.279 [2024-07-15 16:20:41.078262] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.279 [2024-07-15 16:20:41.078479] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.279 [2024-07-15 16:20:41.078488] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.279 [2024-07-15 16:20:41.078495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.279 [2024-07-15 16:20:41.081980] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.279 [2024-07-15 16:20:41.091439] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.279 [2024-07-15 16:20:41.092096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.279 [2024-07-15 16:20:41.092112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.279 [2024-07-15 16:20:41.092120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.279 [2024-07-15 16:20:41.092341] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.279 [2024-07-15 16:20:41.092557] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.279 [2024-07-15 16:20:41.092570] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.279 [2024-07-15 16:20:41.092577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.280 [2024-07-15 16:20:41.096064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.280 [2024-07-15 16:20:41.105323] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.280 [2024-07-15 16:20:41.106044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.280 [2024-07-15 16:20:41.106082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.280 [2024-07-15 16:20:41.106094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.280 [2024-07-15 16:20:41.106339] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.280 [2024-07-15 16:20:41.106560] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.280 [2024-07-15 16:20:41.106570] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.280 [2024-07-15 16:20:41.106578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.280 [2024-07-15 16:20:41.110069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.541 [2024-07-15 16:20:41.119135] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.541 [2024-07-15 16:20:41.119660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.541 [2024-07-15 16:20:41.119678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.541 [2024-07-15 16:20:41.119685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.541 [2024-07-15 16:20:41.119902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.541 [2024-07-15 16:20:41.120119] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.541 [2024-07-15 16:20:41.120134] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.541 [2024-07-15 16:20:41.120142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.541 [2024-07-15 16:20:41.123631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.541 [2024-07-15 16:20:41.132889] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.541 [2024-07-15 16:20:41.133608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.542 [2024-07-15 16:20:41.133646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.542 [2024-07-15 16:20:41.133658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.542 [2024-07-15 16:20:41.133895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.542 [2024-07-15 16:20:41.134115] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.542 [2024-07-15 16:20:41.134132] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.542 [2024-07-15 16:20:41.134140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.542 [2024-07-15 16:20:41.137644] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.542 [2024-07-15 16:20:41.146706] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.542 [2024-07-15 16:20:41.147499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.542 [2024-07-15 16:20:41.147537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.542 [2024-07-15 16:20:41.147549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.542 [2024-07-15 16:20:41.147789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.542 [2024-07-15 16:20:41.148009] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.542 [2024-07-15 16:20:41.148019] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.542 [2024-07-15 16:20:41.148027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.542 [2024-07-15 16:20:41.151528] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.542 [2024-07-15 16:20:41.160585] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.542 [2024-07-15 16:20:41.161290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.542 [2024-07-15 16:20:41.161327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.542 [2024-07-15 16:20:41.161339] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.542 [2024-07-15 16:20:41.161579] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.542 [2024-07-15 16:20:41.161799] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.542 [2024-07-15 16:20:41.161808] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.542 [2024-07-15 16:20:41.161816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.542 [2024-07-15 16:20:41.165314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.542 [2024-07-15 16:20:41.174377] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.542 [2024-07-15 16:20:41.175037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.542 [2024-07-15 16:20:41.175055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.542 [2024-07-15 16:20:41.175064] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.542 [2024-07-15 16:20:41.175286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.542 [2024-07-15 16:20:41.175503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.542 [2024-07-15 16:20:41.175512] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.542 [2024-07-15 16:20:41.175519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.542 [2024-07-15 16:20:41.179005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.542 [2024-07-15 16:20:41.188263] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.542 [2024-07-15 16:20:41.188871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.542 [2024-07-15 16:20:41.188887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.542 [2024-07-15 16:20:41.188894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.542 [2024-07-15 16:20:41.189116] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.542 [2024-07-15 16:20:41.189338] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.542 [2024-07-15 16:20:41.189348] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.542 [2024-07-15 16:20:41.189355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.542 [2024-07-15 16:20:41.192844] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.542 [2024-07-15 16:20:41.202099] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.542 [2024-07-15 16:20:41.202723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.542 [2024-07-15 16:20:41.202760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.542 [2024-07-15 16:20:41.202771] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.542 [2024-07-15 16:20:41.203007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.542 [2024-07-15 16:20:41.203235] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.542 [2024-07-15 16:20:41.203245] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.542 [2024-07-15 16:20:41.203253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.542 [2024-07-15 16:20:41.206748] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.542 [2024-07-15 16:20:41.216010] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.542 [2024-07-15 16:20:41.216720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.542 [2024-07-15 16:20:41.216758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.542 [2024-07-15 16:20:41.216770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.542 [2024-07-15 16:20:41.217010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.542 [2024-07-15 16:20:41.217239] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.542 [2024-07-15 16:20:41.217249] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.542 [2024-07-15 16:20:41.217257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.542 [2024-07-15 16:20:41.220751] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.542 [2024-07-15 16:20:41.229818] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.542 [2024-07-15 16:20:41.230514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.542 [2024-07-15 16:20:41.230551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.542 [2024-07-15 16:20:41.230562] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.542 [2024-07-15 16:20:41.230798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.542 [2024-07-15 16:20:41.231018] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.542 [2024-07-15 16:20:41.231028] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.542 [2024-07-15 16:20:41.231040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.542 [2024-07-15 16:20:41.234544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.542 [2024-07-15 16:20:41.243613] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.542 [2024-07-15 16:20:41.244427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.542 [2024-07-15 16:20:41.244465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.542 [2024-07-15 16:20:41.244476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.542 [2024-07-15 16:20:41.244712] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.542 [2024-07-15 16:20:41.244932] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.542 [2024-07-15 16:20:41.244941] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.542 [2024-07-15 16:20:41.244949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.542 [2024-07-15 16:20:41.248450] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.542 [2024-07-15 16:20:41.257505] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.542 [2024-07-15 16:20:41.258231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.542 [2024-07-15 16:20:41.258269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.542 [2024-07-15 16:20:41.258281] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.542 [2024-07-15 16:20:41.258520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.542 [2024-07-15 16:20:41.258739] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.542 [2024-07-15 16:20:41.258749] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.542 [2024-07-15 16:20:41.258756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.542 [2024-07-15 16:20:41.262258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.542 [2024-07-15 16:20:41.271321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.542 [2024-07-15 16:20:41.272090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.542 [2024-07-15 16:20:41.272135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.542 [2024-07-15 16:20:41.272147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.542 [2024-07-15 16:20:41.272383] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.542 [2024-07-15 16:20:41.272603] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.542 [2024-07-15 16:20:41.272613] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.542 [2024-07-15 16:20:41.272620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.542 [2024-07-15 16:20:41.276114] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.542 [2024-07-15 16:20:41.285174] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.542 [2024-07-15 16:20:41.285918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.542 [2024-07-15 16:20:41.285955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.542 [2024-07-15 16:20:41.285966] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.542 [2024-07-15 16:20:41.286209] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.542 [2024-07-15 16:20:41.286430] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.542 [2024-07-15 16:20:41.286439] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.542 [2024-07-15 16:20:41.286447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.542 [2024-07-15 16:20:41.289941] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.542 [2024-07-15 16:20:41.299001] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.542 [2024-07-15 16:20:41.299618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.542 [2024-07-15 16:20:41.299637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.542 [2024-07-15 16:20:41.299645] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.542 [2024-07-15 16:20:41.299862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.542 [2024-07-15 16:20:41.300078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.542 [2024-07-15 16:20:41.300088] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.542 [2024-07-15 16:20:41.300095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.542 [2024-07-15 16:20:41.303589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.542 [2024-07-15 16:20:41.312850] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.542 [2024-07-15 16:20:41.313538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.542 [2024-07-15 16:20:41.313576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.542 [2024-07-15 16:20:41.313587] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.542 [2024-07-15 16:20:41.313824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.542 [2024-07-15 16:20:41.314044] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.542 [2024-07-15 16:20:41.314053] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.542 [2024-07-15 16:20:41.314060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.542 [2024-07-15 16:20:41.317562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.542 [2024-07-15 16:20:41.326622] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.542 [2024-07-15 16:20:41.327391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.542 [2024-07-15 16:20:41.327428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.542 [2024-07-15 16:20:41.327441] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.542 [2024-07-15 16:20:41.327679] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.542 [2024-07-15 16:20:41.327903] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.542 [2024-07-15 16:20:41.327913] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.542 [2024-07-15 16:20:41.327921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.542 [2024-07-15 16:20:41.331424] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.542 [2024-07-15 16:20:41.340494] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.542 [2024-07-15 16:20:41.341217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.542 [2024-07-15 16:20:41.341256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.542 [2024-07-15 16:20:41.341268] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.542 [2024-07-15 16:20:41.341506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.542 [2024-07-15 16:20:41.341726] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.542 [2024-07-15 16:20:41.341735] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.542 [2024-07-15 16:20:41.341743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.542 [2024-07-15 16:20:41.345246] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.542 [2024-07-15 16:20:41.354308] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.542 [2024-07-15 16:20:41.354927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.542 [2024-07-15 16:20:41.354945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.542 [2024-07-15 16:20:41.354953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.542 [2024-07-15 16:20:41.355175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.542 [2024-07-15 16:20:41.355392] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.542 [2024-07-15 16:20:41.355406] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.542 [2024-07-15 16:20:41.355413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.542 [2024-07-15 16:20:41.358902] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.542 [2024-07-15 16:20:41.368163] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.542 [2024-07-15 16:20:41.368915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.542 [2024-07-15 16:20:41.368952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.542 [2024-07-15 16:20:41.368964] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.542 [2024-07-15 16:20:41.369208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.542 [2024-07-15 16:20:41.369429] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.543 [2024-07-15 16:20:41.369438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.543 [2024-07-15 16:20:41.369446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.543 [2024-07-15 16:20:41.372945] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.804 [2024-07-15 16:20:41.382004] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.804 [2024-07-15 16:20:41.382743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.804 [2024-07-15 16:20:41.382780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.804 [2024-07-15 16:20:41.382791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.804 [2024-07-15 16:20:41.383027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.804 [2024-07-15 16:20:41.383255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.804 [2024-07-15 16:20:41.383265] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.804 [2024-07-15 16:20:41.383273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.804 [2024-07-15 16:20:41.386764] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.804 [2024-07-15 16:20:41.395825] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.804 [2024-07-15 16:20:41.396529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.804 [2024-07-15 16:20:41.396566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.804 [2024-07-15 16:20:41.396577] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.804 [2024-07-15 16:20:41.396813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.804 [2024-07-15 16:20:41.397033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.804 [2024-07-15 16:20:41.397042] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.804 [2024-07-15 16:20:41.397050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.804 [2024-07-15 16:20:41.400550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.804 [2024-07-15 16:20:41.409602] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.804 [2024-07-15 16:20:41.410418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.804 [2024-07-15 16:20:41.410455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.804 [2024-07-15 16:20:41.410466] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.804 [2024-07-15 16:20:41.410702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.804 [2024-07-15 16:20:41.410922] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.804 [2024-07-15 16:20:41.410931] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.804 [2024-07-15 16:20:41.410939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.804 [2024-07-15 16:20:41.414439] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.804 [2024-07-15 16:20:41.423493] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.804 [2024-07-15 16:20:41.424172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.804 [2024-07-15 16:20:41.424209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.804 [2024-07-15 16:20:41.424229] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.804 [2024-07-15 16:20:41.424466] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.804 [2024-07-15 16:20:41.424686] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.804 [2024-07-15 16:20:41.424696] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.804 [2024-07-15 16:20:41.424703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.804 [2024-07-15 16:20:41.428203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.804 [2024-07-15 16:20:41.437273] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.804 [2024-07-15 16:20:41.437980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.804 [2024-07-15 16:20:41.438018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.804 [2024-07-15 16:20:41.438029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.804 [2024-07-15 16:20:41.438274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.804 [2024-07-15 16:20:41.438495] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.804 [2024-07-15 16:20:41.438505] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.804 [2024-07-15 16:20:41.438513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.804 [2024-07-15 16:20:41.442004] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.804 [2024-07-15 16:20:41.451058] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.804 [2024-07-15 16:20:41.451815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.804 [2024-07-15 16:20:41.451853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.804 [2024-07-15 16:20:41.451863] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.804 [2024-07-15 16:20:41.452099] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.804 [2024-07-15 16:20:41.452329] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.804 [2024-07-15 16:20:41.452339] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.804 [2024-07-15 16:20:41.452347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.804 [2024-07-15 16:20:41.455843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.804 [2024-07-15 16:20:41.464896] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.804 [2024-07-15 16:20:41.465528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.804 [2024-07-15 16:20:41.465547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.804 [2024-07-15 16:20:41.465555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.804 [2024-07-15 16:20:41.465771] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.804 [2024-07-15 16:20:41.465992] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.804 [2024-07-15 16:20:41.466002] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.804 [2024-07-15 16:20:41.466009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.804 [2024-07-15 16:20:41.469508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.804 [2024-07-15 16:20:41.478766] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.804 [2024-07-15 16:20:41.479491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.804 [2024-07-15 16:20:41.479528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.804 [2024-07-15 16:20:41.479538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.804 [2024-07-15 16:20:41.479774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.804 [2024-07-15 16:20:41.479994] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.804 [2024-07-15 16:20:41.480004] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.804 [2024-07-15 16:20:41.480012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.804 [2024-07-15 16:20:41.483510] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.804 [2024-07-15 16:20:41.492564] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.804 [2024-07-15 16:20:41.493360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.804 [2024-07-15 16:20:41.493398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.804 [2024-07-15 16:20:41.493409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.804 [2024-07-15 16:20:41.493646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.804 [2024-07-15 16:20:41.493865] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.804 [2024-07-15 16:20:41.493875] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.804 [2024-07-15 16:20:41.493882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.804 [2024-07-15 16:20:41.497385] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.804 [2024-07-15 16:20:41.506438] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.804 [2024-07-15 16:20:41.507128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.804 [2024-07-15 16:20:41.507165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.804 [2024-07-15 16:20:41.507176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.804 [2024-07-15 16:20:41.507412] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.804 [2024-07-15 16:20:41.507633] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.804 [2024-07-15 16:20:41.507642] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.804 [2024-07-15 16:20:41.507650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.804 [2024-07-15 16:20:41.511147] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.804 [2024-07-15 16:20:41.520207] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.804 [2024-07-15 16:20:41.520897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.804 [2024-07-15 16:20:41.520934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.804 [2024-07-15 16:20:41.520945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.804 [2024-07-15 16:20:41.521190] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.804 [2024-07-15 16:20:41.521411] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.804 [2024-07-15 16:20:41.521420] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.804 [2024-07-15 16:20:41.521428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.804 [2024-07-15 16:20:41.524920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.804 [2024-07-15 16:20:41.533975] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.804 [2024-07-15 16:20:41.534703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.804 [2024-07-15 16:20:41.534740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.804 [2024-07-15 16:20:41.534751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.804 [2024-07-15 16:20:41.534987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.804 [2024-07-15 16:20:41.535221] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.804 [2024-07-15 16:20:41.535239] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.804 [2024-07-15 16:20:41.535247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.804 [2024-07-15 16:20:41.538753] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.804 [2024-07-15 16:20:41.547814] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.804 [2024-07-15 16:20:41.548586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.804 [2024-07-15 16:20:41.548623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.804 [2024-07-15 16:20:41.548634] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.804 [2024-07-15 16:20:41.548870] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.804 [2024-07-15 16:20:41.549090] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.804 [2024-07-15 16:20:41.549100] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.804 [2024-07-15 16:20:41.549107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.804 [2024-07-15 16:20:41.552608] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.804 [2024-07-15 16:20:41.561659] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.804 [2024-07-15 16:20:41.562167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.804 [2024-07-15 16:20:41.562193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.804 [2024-07-15 16:20:41.562206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.804 [2024-07-15 16:20:41.562430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.804 [2024-07-15 16:20:41.562648] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.804 [2024-07-15 16:20:41.562656] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.804 [2024-07-15 16:20:41.562664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.804 [2024-07-15 16:20:41.566160] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.804 [2024-07-15 16:20:41.575421] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.804 [2024-07-15 16:20:41.576120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.804 [2024-07-15 16:20:41.576164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.804 [2024-07-15 16:20:41.576175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.804 [2024-07-15 16:20:41.576411] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.804 [2024-07-15 16:20:41.576632] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.804 [2024-07-15 16:20:41.576641] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.805 [2024-07-15 16:20:41.576649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.805 [2024-07-15 16:20:41.580146] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.805 [2024-07-15 16:20:41.589199] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.805 [2024-07-15 16:20:41.589805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.805 [2024-07-15 16:20:41.589842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.805 [2024-07-15 16:20:41.589853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.805 [2024-07-15 16:20:41.590089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.805 [2024-07-15 16:20:41.590320] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.805 [2024-07-15 16:20:41.590330] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.805 [2024-07-15 16:20:41.590338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2469021 Killed "${NVMF_APP[@]}" "$@" 00:29:05.805 16:20:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:29:05.805 16:20:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:05.805 [2024-07-15 16:20:41.593831] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.805 16:20:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:05.805 16:20:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:05.805 16:20:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:05.805 16:20:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2470727 00:29:05.805 16:20:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2470727 00:29:05.805 16:20:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:05.805 16:20:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2470727 ']' 00:29:05.805 16:20:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:05.805 16:20:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:05.805 16:20:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:05.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:05.805 [2024-07-15 16:20:41.603093] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.805 16:20:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:05.805 16:20:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:05.805 [2024-07-15 16:20:41.603852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.805 [2024-07-15 16:20:41.603891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.805 [2024-07-15 16:20:41.603902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.805 [2024-07-15 16:20:41.604148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.805 [2024-07-15 16:20:41.604370] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.805 [2024-07-15 16:20:41.604381] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.805 [2024-07-15 16:20:41.604389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.805 [2024-07-15 16:20:41.607888] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.805 [2024-07-15 16:20:41.616944] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.805 [2024-07-15 16:20:41.617548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.805 [2024-07-15 16:20:41.617586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.805 [2024-07-15 16:20:41.617597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.805 [2024-07-15 16:20:41.617834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.805 [2024-07-15 16:20:41.618053] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.805 [2024-07-15 16:20:41.618063] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.805 [2024-07-15 16:20:41.618070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.805 [2024-07-15 16:20:41.621570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.805 [2024-07-15 16:20:41.630880] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.805 [2024-07-15 16:20:41.631602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.805 [2024-07-15 16:20:41.631639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:05.805 [2024-07-15 16:20:41.631651] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:05.805 [2024-07-15 16:20:41.631887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:05.805 [2024-07-15 16:20:41.632107] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.805 [2024-07-15 16:20:41.632117] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.805 [2024-07-15 16:20:41.632137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.805 [2024-07-15 16:20:41.635649] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.067 [2024-07-15 16:20:41.644736] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.067 [2024-07-15 16:20:41.645553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.067 [2024-07-15 16:20:41.645591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420
00:29:06.067 [2024-07-15 16:20:41.645602] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set
00:29:06.067 [2024-07-15 16:20:41.645838] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor
00:29:06.067 [2024-07-15 16:20:41.646058] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.067 [2024-07-15 16:20:41.646068] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.067 [2024-07-15 16:20:41.646076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.067 [2024-07-15 16:20:41.649577] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.067 [2024-07-15 16:20:41.650855] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization...
00:29:06.067 [2024-07-15 16:20:41.650901] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:06.067 [2024-07-15 16:20:41.658638] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.067 [2024-07-15 16:20:41.659395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.067 [2024-07-15 16:20:41.659432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420
00:29:06.067 [2024-07-15 16:20:41.659444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set
00:29:06.067 [2024-07-15 16:20:41.659680] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor
00:29:06.067 [2024-07-15 16:20:41.659900] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.067 [2024-07-15 16:20:41.659910] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.067 [2024-07-15 16:20:41.659919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.067 [2024-07-15 16:20:41.663419] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
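The EAL parameters line shows the core mask from the command line being handed to DPDK: -m 0xE on nvmf_tgt becomes -c 0xE for EAL, i.e. cores 1, 2 and 3 with core 0 left free, which matches the "Total cores available: 3" notice and the three reactor start messages further down. A small, purely illustrative way to expand such a mask:

# Expand a hex core mask into the core numbers it selects (0xE -> 1 2 3).
mask=0xE
for core in $(seq 0 63); do
    (( (mask >> core) & 1 )) && printf '%d ' "$core"
done
echo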
00:29:06.067 [2024-07-15 16:20:41.672504] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.067 [2024-07-15 16:20:41.673334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.067 [2024-07-15 16:20:41.673371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.067 [2024-07-15 16:20:41.673383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.067 [2024-07-15 16:20:41.673619] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.067 [2024-07-15 16:20:41.673839] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.067 [2024-07-15 16:20:41.673848] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.067 [2024-07-15 16:20:41.673861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.067 [2024-07-15 16:20:41.677361] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.067 EAL: No free 2048 kB hugepages reported on node 1 00:29:06.067 [2024-07-15 16:20:41.686424] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.067 [2024-07-15 16:20:41.687077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.067 [2024-07-15 16:20:41.687115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.067 [2024-07-15 16:20:41.687133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.067 [2024-07-15 16:20:41.687370] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.067 [2024-07-15 16:20:41.687591] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.067 [2024-07-15 16:20:41.687600] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.067 [2024-07-15 16:20:41.687608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.067 [2024-07-15 16:20:41.691098] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
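The "EAL: No free 2048 kB hugepages reported on node 1" message is informational in this run, since the reactors still come up a few lines later, but when DPDK initialization does fail for lack of hugepages the per-node pools are the first thing to check. A hedged sketch using the standard sysfs and procfs locations (paths are the usual Linux ones, not taken from this log):

# Free 2 MB hugepages per NUMA node, plus the global summary.
grep . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages
grep -i huge /proc/meminfo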
00:29:06.067 [2024-07-15 16:20:41.700359] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.067 [2024-07-15 16:20:41.700965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.067 [2024-07-15 16:20:41.701003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.067 [2024-07-15 16:20:41.701014] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.067 [2024-07-15 16:20:41.701258] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.067 [2024-07-15 16:20:41.701479] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.067 [2024-07-15 16:20:41.701488] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.067 [2024-07-15 16:20:41.701496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.067 [2024-07-15 16:20:41.704988] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.067 [2024-07-15 16:20:41.714144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.067 [2024-07-15 16:20:41.714865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.067 [2024-07-15 16:20:41.714902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.067 [2024-07-15 16:20:41.714913] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.067 [2024-07-15 16:20:41.715157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.067 [2024-07-15 16:20:41.715377] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.067 [2024-07-15 16:20:41.715387] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.067 [2024-07-15 16:20:41.715395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.067 [2024-07-15 16:20:41.718885] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.067 [2024-07-15 16:20:41.727943] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.067 [2024-07-15 16:20:41.728680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.067 [2024-07-15 16:20:41.728718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.067 [2024-07-15 16:20:41.728729] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.067 [2024-07-15 16:20:41.728965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.067 [2024-07-15 16:20:41.729193] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.067 [2024-07-15 16:20:41.729203] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.067 [2024-07-15 16:20:41.729211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.067 [2024-07-15 16:20:41.730586] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:06.067 [2024-07-15 16:20:41.732706] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.067 [2024-07-15 16:20:41.741781] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.067 [2024-07-15 16:20:41.742614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.067 [2024-07-15 16:20:41.742652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.067 [2024-07-15 16:20:41.742664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.068 [2024-07-15 16:20:41.742900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.068 [2024-07-15 16:20:41.743121] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.068 [2024-07-15 16:20:41.743139] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.068 [2024-07-15 16:20:41.743147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.068 [2024-07-15 16:20:41.746641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.068 [2024-07-15 16:20:41.755703] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.068 [2024-07-15 16:20:41.756462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.068 [2024-07-15 16:20:41.756500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.068 [2024-07-15 16:20:41.756511] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.068 [2024-07-15 16:20:41.756748] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.068 [2024-07-15 16:20:41.756968] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.068 [2024-07-15 16:20:41.756978] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.068 [2024-07-15 16:20:41.756986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.068 [2024-07-15 16:20:41.760490] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.068 [2024-07-15 16:20:41.769567] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.068 [2024-07-15 16:20:41.770265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.068 [2024-07-15 16:20:41.770303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.068 [2024-07-15 16:20:41.770317] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.068 [2024-07-15 16:20:41.770567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.068 [2024-07-15 16:20:41.770787] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.068 [2024-07-15 16:20:41.770798] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.068 [2024-07-15 16:20:41.770806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.068 [2024-07-15 16:20:41.774309] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.068 [2024-07-15 16:20:41.783368] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.068 [2024-07-15 16:20:41.784149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.068 [2024-07-15 16:20:41.784186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.068 [2024-07-15 16:20:41.784199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.068 [2024-07-15 16:20:41.784245] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:06.068 [2024-07-15 16:20:41.784267] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:06.068 [2024-07-15 16:20:41.784273] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:06.068 [2024-07-15 16:20:41.784278] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:06.068 [2024-07-15 16:20:41.784283] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:06.068 [2024-07-15 16:20:41.784439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor
00:29:06.068 [2024-07-15 16:20:41.784428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:29:06.068 [2024-07-15 16:20:41.784643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:29:06.068 [2024-07-15 16:20:41.784659] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.068 [2024-07-15 16:20:41.784668] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.068 [2024-07-15 16:20:41.784676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.068 [2024-07-15 16:20:41.784644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:29:06.068 [2024-07-15 16:20:41.788177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.068 [2024-07-15 16:20:41.797241] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.068 [2024-07-15 16:20:41.797948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.068 [2024-07-15 16:20:41.797987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420
00:29:06.068 [2024-07-15 16:20:41.797999] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set
00:29:06.068 [2024-07-15 16:20:41.798243] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor
00:29:06.068 [2024-07-15 16:20:41.798464] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.068 [2024-07-15 16:20:41.798473] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.068 [2024-07-15 16:20:41.798481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.068 [2024-07-15 16:20:41.801974] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
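The app_setup_trace notices above spell out the two ways to look at the tracepoint data enabled by -e 0xFFFF: attach spdk_trace to the live shared-memory file, or copy that file for offline decoding. A short sketch following those hints; the spdk_trace invocation is the one printed by the app, while the destination path for the copy is just an example:

# Live snapshot of nvmf tracepoints from the running target (shm id 0, as started with -i 0).
spdk_trace -s nvmf -i 0

# Or keep the raw trace file for later offline analysis, as the notice suggests.
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0.saved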
00:29:06.068 [2024-07-15 16:20:41.811029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.068 [2024-07-15 16:20:41.811795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.068 [2024-07-15 16:20:41.811833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.068 [2024-07-15 16:20:41.811844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.068 [2024-07-15 16:20:41.812081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.068 [2024-07-15 16:20:41.812309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.068 [2024-07-15 16:20:41.812319] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.068 [2024-07-15 16:20:41.812327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.068 [2024-07-15 16:20:41.815816] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.068 [2024-07-15 16:20:41.824874] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.068 [2024-07-15 16:20:41.825615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.068 [2024-07-15 16:20:41.825654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.068 [2024-07-15 16:20:41.825666] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.068 [2024-07-15 16:20:41.825902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.068 [2024-07-15 16:20:41.826130] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.068 [2024-07-15 16:20:41.826140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.068 [2024-07-15 16:20:41.826148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.068 [2024-07-15 16:20:41.829639] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.068 [2024-07-15 16:20:41.838694] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.068 [2024-07-15 16:20:41.839424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.068 [2024-07-15 16:20:41.839462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.068 [2024-07-15 16:20:41.839473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.068 [2024-07-15 16:20:41.839709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.068 [2024-07-15 16:20:41.839929] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.068 [2024-07-15 16:20:41.839938] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.068 [2024-07-15 16:20:41.839946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.068 [2024-07-15 16:20:41.843454] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.068 [2024-07-15 16:20:41.852508] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.068 [2024-07-15 16:20:41.853228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.068 [2024-07-15 16:20:41.853266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.068 [2024-07-15 16:20:41.853277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.068 [2024-07-15 16:20:41.853518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.068 [2024-07-15 16:20:41.853738] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.068 [2024-07-15 16:20:41.853748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.068 [2024-07-15 16:20:41.853755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.068 [2024-07-15 16:20:41.857257] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.068 [2024-07-15 16:20:41.866315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.068 [2024-07-15 16:20:41.867012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.068 [2024-07-15 16:20:41.867050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.068 [2024-07-15 16:20:41.867061] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.068 [2024-07-15 16:20:41.867305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.068 [2024-07-15 16:20:41.867526] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.068 [2024-07-15 16:20:41.867535] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.068 [2024-07-15 16:20:41.867543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.068 [2024-07-15 16:20:41.871038] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.068 [2024-07-15 16:20:41.880096] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.068 [2024-07-15 16:20:41.880878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.068 [2024-07-15 16:20:41.880916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.068 [2024-07-15 16:20:41.880927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.068 [2024-07-15 16:20:41.881171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.068 [2024-07-15 16:20:41.881392] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.068 [2024-07-15 16:20:41.881401] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.068 [2024-07-15 16:20:41.881409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.068 [2024-07-15 16:20:41.884900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.068 [2024-07-15 16:20:41.893954] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.068 [2024-07-15 16:20:41.894732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.068 [2024-07-15 16:20:41.894770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.068 [2024-07-15 16:20:41.894781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.068 [2024-07-15 16:20:41.895017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.068 [2024-07-15 16:20:41.895244] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.068 [2024-07-15 16:20:41.895256] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.068 [2024-07-15 16:20:41.895268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.068 [2024-07-15 16:20:41.898762] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.329 [2024-07-15 16:20:41.907821] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.329 [2024-07-15 16:20:41.908611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.329 [2024-07-15 16:20:41.908649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.329 [2024-07-15 16:20:41.908660] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.329 [2024-07-15 16:20:41.908895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.330 [2024-07-15 16:20:41.909115] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.330 [2024-07-15 16:20:41.909131] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.330 [2024-07-15 16:20:41.909140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.330 [2024-07-15 16:20:41.912630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.330 [2024-07-15 16:20:41.921680] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.330 [2024-07-15 16:20:41.922441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.330 [2024-07-15 16:20:41.922478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.330 [2024-07-15 16:20:41.922489] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.330 [2024-07-15 16:20:41.922725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.330 [2024-07-15 16:20:41.922945] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.330 [2024-07-15 16:20:41.922955] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.330 [2024-07-15 16:20:41.922962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.330 [2024-07-15 16:20:41.926463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.330 [2024-07-15 16:20:41.935518] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.330 [2024-07-15 16:20:41.936216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.330 [2024-07-15 16:20:41.936254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.330 [2024-07-15 16:20:41.936265] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.330 [2024-07-15 16:20:41.936501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.330 [2024-07-15 16:20:41.936723] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.330 [2024-07-15 16:20:41.936732] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.330 [2024-07-15 16:20:41.936740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.330 [2024-07-15 16:20:41.940251] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.330 [2024-07-15 16:20:41.949309] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.330 [2024-07-15 16:20:41.950078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.330 [2024-07-15 16:20:41.950120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.330 [2024-07-15 16:20:41.950140] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.330 [2024-07-15 16:20:41.950378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.330 [2024-07-15 16:20:41.950598] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.330 [2024-07-15 16:20:41.950608] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.330 [2024-07-15 16:20:41.950615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.330 [2024-07-15 16:20:41.954108] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.330 [2024-07-15 16:20:41.963165] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.330 [2024-07-15 16:20:41.963794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.330 [2024-07-15 16:20:41.963812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.330 [2024-07-15 16:20:41.963820] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.330 [2024-07-15 16:20:41.964037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.330 [2024-07-15 16:20:41.964258] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.330 [2024-07-15 16:20:41.964268] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.330 [2024-07-15 16:20:41.964276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.330 [2024-07-15 16:20:41.967763] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.330 [2024-07-15 16:20:41.977029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.330 [2024-07-15 16:20:41.977632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.330 [2024-07-15 16:20:41.977670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.330 [2024-07-15 16:20:41.977681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.330 [2024-07-15 16:20:41.977917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.330 [2024-07-15 16:20:41.978145] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.330 [2024-07-15 16:20:41.978155] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.330 [2024-07-15 16:20:41.978163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.330 [2024-07-15 16:20:41.981654] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.330 [2024-07-15 16:20:41.990916] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.330 [2024-07-15 16:20:41.991628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.330 [2024-07-15 16:20:41.991665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.330 [2024-07-15 16:20:41.991676] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.330 [2024-07-15 16:20:41.991913] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.330 [2024-07-15 16:20:41.992145] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.330 [2024-07-15 16:20:41.992155] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.330 [2024-07-15 16:20:41.992163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.330 [2024-07-15 16:20:41.995654] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.330 [2024-07-15 16:20:42.004705] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.330 [2024-07-15 16:20:42.005182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.330 [2024-07-15 16:20:42.005207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.330 [2024-07-15 16:20:42.005216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.330 [2024-07-15 16:20:42.005438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.330 [2024-07-15 16:20:42.005656] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.330 [2024-07-15 16:20:42.005665] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.330 [2024-07-15 16:20:42.005672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.330 [2024-07-15 16:20:42.009165] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.330 [2024-07-15 16:20:42.018630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.330 [2024-07-15 16:20:42.019391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.330 [2024-07-15 16:20:42.019428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.330 [2024-07-15 16:20:42.019439] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.330 [2024-07-15 16:20:42.019675] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.330 [2024-07-15 16:20:42.019895] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.330 [2024-07-15 16:20:42.019904] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.330 [2024-07-15 16:20:42.019913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.330 [2024-07-15 16:20:42.023414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.330 [2024-07-15 16:20:42.032469] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.330 [2024-07-15 16:20:42.033019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.330 [2024-07-15 16:20:42.033057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.330 [2024-07-15 16:20:42.033070] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.330 [2024-07-15 16:20:42.033316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.330 [2024-07-15 16:20:42.033537] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.330 [2024-07-15 16:20:42.033547] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.330 [2024-07-15 16:20:42.033554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.330 [2024-07-15 16:20:42.037051] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.330 [2024-07-15 16:20:42.046326] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.330 [2024-07-15 16:20:42.047089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.330 [2024-07-15 16:20:42.047134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.330 [2024-07-15 16:20:42.047146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.330 [2024-07-15 16:20:42.047382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.330 [2024-07-15 16:20:42.047603] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.330 [2024-07-15 16:20:42.047612] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.330 [2024-07-15 16:20:42.047620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.330 [2024-07-15 16:20:42.051113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.330 [2024-07-15 16:20:42.060173] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.330 [2024-07-15 16:20:42.060939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.330 [2024-07-15 16:20:42.060977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.330 [2024-07-15 16:20:42.060988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.330 [2024-07-15 16:20:42.061232] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.330 [2024-07-15 16:20:42.061453] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.330 [2024-07-15 16:20:42.061462] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.330 [2024-07-15 16:20:42.061470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.330 [2024-07-15 16:20:42.064962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.330 [2024-07-15 16:20:42.074060] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.330 [2024-07-15 16:20:42.074818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.330 [2024-07-15 16:20:42.074856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.330 [2024-07-15 16:20:42.074867] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.330 [2024-07-15 16:20:42.075103] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.330 [2024-07-15 16:20:42.075331] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.330 [2024-07-15 16:20:42.075341] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.330 [2024-07-15 16:20:42.075349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.330 [2024-07-15 16:20:42.078842] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.330 [2024-07-15 16:20:42.087894] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.330 [2024-07-15 16:20:42.088615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.330 [2024-07-15 16:20:42.088653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.330 [2024-07-15 16:20:42.088668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.330 [2024-07-15 16:20:42.088904] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.330 [2024-07-15 16:20:42.089133] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.330 [2024-07-15 16:20:42.089143] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.330 [2024-07-15 16:20:42.089151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.330 [2024-07-15 16:20:42.092643] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.330 [2024-07-15 16:20:42.101696] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.330 [2024-07-15 16:20:42.102454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.330 [2024-07-15 16:20:42.102492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.330 [2024-07-15 16:20:42.102503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.330 [2024-07-15 16:20:42.102739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.330 [2024-07-15 16:20:42.102959] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.330 [2024-07-15 16:20:42.102970] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.330 [2024-07-15 16:20:42.102978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.330 [2024-07-15 16:20:42.106477] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.330 [2024-07-15 16:20:42.115532] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.330 [2024-07-15 16:20:42.116083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.330 [2024-07-15 16:20:42.116121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.330 [2024-07-15 16:20:42.116140] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.330 [2024-07-15 16:20:42.116378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.330 [2024-07-15 16:20:42.116598] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.330 [2024-07-15 16:20:42.116608] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.330 [2024-07-15 16:20:42.116616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.330 [2024-07-15 16:20:42.120106] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.330 [2024-07-15 16:20:42.129368] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.330 [2024-07-15 16:20:42.130031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.330 [2024-07-15 16:20:42.130050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.330 [2024-07-15 16:20:42.130058] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.330 [2024-07-15 16:20:42.130280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.330 [2024-07-15 16:20:42.130496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.330 [2024-07-15 16:20:42.130513] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.330 [2024-07-15 16:20:42.130520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.330 [2024-07-15 16:20:42.134006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.330 [2024-07-15 16:20:42.143269] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.330 [2024-07-15 16:20:42.143955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.330 [2024-07-15 16:20:42.143992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.330 [2024-07-15 16:20:42.144003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.330 [2024-07-15 16:20:42.144246] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.330 [2024-07-15 16:20:42.144467] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.330 [2024-07-15 16:20:42.144476] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.331 [2024-07-15 16:20:42.144485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.331 [2024-07-15 16:20:42.147975] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.331 [2024-07-15 16:20:42.157021] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.331 [2024-07-15 16:20:42.157691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.331 [2024-07-15 16:20:42.157710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.331 [2024-07-15 16:20:42.157718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.331 [2024-07-15 16:20:42.157935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.331 [2024-07-15 16:20:42.158155] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.331 [2024-07-15 16:20:42.158165] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.331 [2024-07-15 16:20:42.158173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.331 [2024-07-15 16:20:42.161658] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.592 [2024-07-15 16:20:42.170921] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.592 [2024-07-15 16:20:42.171501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.592 [2024-07-15 16:20:42.171519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.592 [2024-07-15 16:20:42.171526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.592 [2024-07-15 16:20:42.171742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.592 [2024-07-15 16:20:42.171959] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.592 [2024-07-15 16:20:42.171968] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.592 [2024-07-15 16:20:42.171975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.592 [2024-07-15 16:20:42.175467] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.592 [2024-07-15 16:20:42.184722] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.592 [2024-07-15 16:20:42.185467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.592 [2024-07-15 16:20:42.185505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.592 [2024-07-15 16:20:42.185516] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.592 [2024-07-15 16:20:42.185752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.592 [2024-07-15 16:20:42.185972] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.592 [2024-07-15 16:20:42.185982] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.592 [2024-07-15 16:20:42.185990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.592 [2024-07-15 16:20:42.189489] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.592 [2024-07-15 16:20:42.198540] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.592 [2024-07-15 16:20:42.199376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.592 [2024-07-15 16:20:42.199414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.592 [2024-07-15 16:20:42.199425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.592 [2024-07-15 16:20:42.199661] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.592 [2024-07-15 16:20:42.199881] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.592 [2024-07-15 16:20:42.199891] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.592 [2024-07-15 16:20:42.199899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.592 [2024-07-15 16:20:42.203395] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.592 [2024-07-15 16:20:42.212442] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.592 [2024-07-15 16:20:42.213213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.592 [2024-07-15 16:20:42.213251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.592 [2024-07-15 16:20:42.213263] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.592 [2024-07-15 16:20:42.213501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.592 [2024-07-15 16:20:42.213720] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.592 [2024-07-15 16:20:42.213729] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.592 [2024-07-15 16:20:42.213738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.592 [2024-07-15 16:20:42.217236] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.592 [2024-07-15 16:20:42.226287] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.592 [2024-07-15 16:20:42.226908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.592 [2024-07-15 16:20:42.226927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.592 [2024-07-15 16:20:42.226935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.592 [2024-07-15 16:20:42.227161] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.592 [2024-07-15 16:20:42.227378] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.593 [2024-07-15 16:20:42.227388] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.593 [2024-07-15 16:20:42.227395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.593 [2024-07-15 16:20:42.230879] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.593 [2024-07-15 16:20:42.240139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.593 [2024-07-15 16:20:42.240889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-07-15 16:20:42.240927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.593 [2024-07-15 16:20:42.240938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.593 [2024-07-15 16:20:42.241181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.593 [2024-07-15 16:20:42.241402] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.593 [2024-07-15 16:20:42.241411] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.593 [2024-07-15 16:20:42.241419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.593 [2024-07-15 16:20:42.244909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.593 [2024-07-15 16:20:42.253992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.593 [2024-07-15 16:20:42.254669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-07-15 16:20:42.254709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.593 [2024-07-15 16:20:42.254720] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.593 [2024-07-15 16:20:42.254956] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.593 [2024-07-15 16:20:42.255183] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.593 [2024-07-15 16:20:42.255193] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.593 [2024-07-15 16:20:42.255201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.593 [2024-07-15 16:20:42.258691] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.593 [2024-07-15 16:20:42.267744] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.593 [2024-07-15 16:20:42.268489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-07-15 16:20:42.268526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.593 [2024-07-15 16:20:42.268537] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.593 [2024-07-15 16:20:42.268777] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.593 [2024-07-15 16:20:42.269000] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.593 [2024-07-15 16:20:42.269008] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.593 [2024-07-15 16:20:42.269021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.593 [2024-07-15 16:20:42.272518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.593 [2024-07-15 16:20:42.281571] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.593 [2024-07-15 16:20:42.282243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-07-15 16:20:42.282261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.593 [2024-07-15 16:20:42.282269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.593 [2024-07-15 16:20:42.282486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.593 [2024-07-15 16:20:42.282702] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.593 [2024-07-15 16:20:42.282712] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.593 [2024-07-15 16:20:42.282719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.593 [2024-07-15 16:20:42.286209] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.593 [2024-07-15 16:20:42.295465] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.593 [2024-07-15 16:20:42.296121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-07-15 16:20:42.296141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.593 [2024-07-15 16:20:42.296149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.593 [2024-07-15 16:20:42.296365] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.593 [2024-07-15 16:20:42.296581] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.593 [2024-07-15 16:20:42.296589] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.593 [2024-07-15 16:20:42.296596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.593 [2024-07-15 16:20:42.300081] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.593 [2024-07-15 16:20:42.309337] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.593 [2024-07-15 16:20:42.309767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-07-15 16:20:42.309783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.593 [2024-07-15 16:20:42.309790] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.593 [2024-07-15 16:20:42.310006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.593 [2024-07-15 16:20:42.310227] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.593 [2024-07-15 16:20:42.310235] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.593 [2024-07-15 16:20:42.310242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.593 [2024-07-15 16:20:42.313755] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.593 [2024-07-15 16:20:42.323216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.593 [2024-07-15 16:20:42.323919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-07-15 16:20:42.323955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.593 [2024-07-15 16:20:42.323967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.593 [2024-07-15 16:20:42.324210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.593 [2024-07-15 16:20:42.324430] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.593 [2024-07-15 16:20:42.324439] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.593 [2024-07-15 16:20:42.324446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.593 [2024-07-15 16:20:42.327938] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.593 [2024-07-15 16:20:42.336992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.593 [2024-07-15 16:20:42.337762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-07-15 16:20:42.337798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.593 [2024-07-15 16:20:42.337809] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.593 [2024-07-15 16:20:42.338045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.593 [2024-07-15 16:20:42.338273] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.593 [2024-07-15 16:20:42.338282] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.593 [2024-07-15 16:20:42.338291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.593 [2024-07-15 16:20:42.341795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.593 [2024-07-15 16:20:42.350853] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.593 [2024-07-15 16:20:42.351606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-07-15 16:20:42.351643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.593 [2024-07-15 16:20:42.351654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.593 [2024-07-15 16:20:42.351891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.593 [2024-07-15 16:20:42.352111] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.593 [2024-07-15 16:20:42.352119] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.593 [2024-07-15 16:20:42.352136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.593 [2024-07-15 16:20:42.355629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.593 [2024-07-15 16:20:42.364681] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.593 [2024-07-15 16:20:42.365443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-07-15 16:20:42.365479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.593 [2024-07-15 16:20:42.365490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.593 [2024-07-15 16:20:42.365730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.593 [2024-07-15 16:20:42.365950] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.593 [2024-07-15 16:20:42.365959] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.593 [2024-07-15 16:20:42.365967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.593 [2024-07-15 16:20:42.369471] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.593 [2024-07-15 16:20:42.378530] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.593 [2024-07-15 16:20:42.379219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-07-15 16:20:42.379256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.594 [2024-07-15 16:20:42.379268] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.594 [2024-07-15 16:20:42.379508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.594 [2024-07-15 16:20:42.379728] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.594 [2024-07-15 16:20:42.379737] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.594 [2024-07-15 16:20:42.379744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.594 [2024-07-15 16:20:42.383246] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.594 [2024-07-15 16:20:42.392302] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.594 [2024-07-15 16:20:42.393068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-07-15 16:20:42.393104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.594 [2024-07-15 16:20:42.393117] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.594 [2024-07-15 16:20:42.393364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.594 [2024-07-15 16:20:42.393583] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.594 [2024-07-15 16:20:42.393591] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.594 [2024-07-15 16:20:42.393599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.594 [2024-07-15 16:20:42.397091] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.594 [2024-07-15 16:20:42.406149] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.594 [2024-07-15 16:20:42.406916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-07-15 16:20:42.406953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.594 [2024-07-15 16:20:42.406965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.594 [2024-07-15 16:20:42.407210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.594 [2024-07-15 16:20:42.407430] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.594 [2024-07-15 16:20:42.407439] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.594 [2024-07-15 16:20:42.407452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.594 [2024-07-15 16:20:42.410945] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.594 [2024-07-15 16:20:42.420004] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.594 [2024-07-15 16:20:42.420749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-07-15 16:20:42.420787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.594 [2024-07-15 16:20:42.420798] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.594 [2024-07-15 16:20:42.421034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.594 [2024-07-15 16:20:42.421262] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.594 [2024-07-15 16:20:42.421271] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.594 [2024-07-15 16:20:42.421278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.594 [2024-07-15 16:20:42.424772] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.594 16:20:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:06.594 16:20:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:29:06.594 16:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:06.594 16:20:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:06.594 16:20:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:06.858 [2024-07-15 16:20:42.433831] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.858 [2024-07-15 16:20:42.434566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.858 [2024-07-15 16:20:42.434603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.858 [2024-07-15 16:20:42.434615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.858 [2024-07-15 16:20:42.434852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.858 [2024-07-15 16:20:42.435071] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.858 [2024-07-15 16:20:42.435082] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.858 [2024-07-15 16:20:42.435090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.858 [2024-07-15 16:20:42.438594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.858 [2024-07-15 16:20:42.447689] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.858 [2024-07-15 16:20:42.448493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.858 [2024-07-15 16:20:42.448530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.858 [2024-07-15 16:20:42.448541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.858 [2024-07-15 16:20:42.448780] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.858 [2024-07-15 16:20:42.448999] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.858 [2024-07-15 16:20:42.449009] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.858 [2024-07-15 16:20:42.449018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.858 [2024-07-15 16:20:42.452529] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.858 [2024-07-15 16:20:42.461587] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.858 [2024-07-15 16:20:42.462140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.858 [2024-07-15 16:20:42.462178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.858 [2024-07-15 16:20:42.462191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.858 [2024-07-15 16:20:42.462429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.858 [2024-07-15 16:20:42.462649] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.858 [2024-07-15 16:20:42.462658] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.858 [2024-07-15 16:20:42.462665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.858 [2024-07-15 16:20:42.466167] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
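The repeated resets above all fail the same way: connect() to 10.0.0.2:4420 returns errno 111 (ECONNREFUSED), i.e. the bdevperf initiator keeps retrying before the target side has configured its TCP listener; the '*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***' notice only appears further down in the log, after which the reset finally succeeds. A minimal way to wait for the listener before starting an initiator is sketched below; this is an illustration only, not part of the test scripts, and it assumes a bash shell (the /dev/tcp redirection is a bash built-in) with the address and port taken from the log.

    #!/usr/bin/env bash
    # Wait up to ~10 s for the NVMe/TCP listener on 10.0.0.2:4420 to accept
    # connections; until it exists, connect() fails with errno 111 exactly as
    # in the log above.
    addr=10.0.0.2 port=4420
    for _ in $(seq 1 10); do
        if timeout 1 bash -c "exec </dev/tcp/${addr}/${port}" 2>/dev/null; then
            echo "listener on ${addr}:${port} is up"
            exit 0
        fi
        sleep 1
    done
    echo "${addr}:${port} still refusing connections" >&2
    exit 1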
00:29:06.858 16:20:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:06.858 16:20:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:06.858 16:20:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.858 16:20:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:06.858 [2024-07-15 16:20:42.471017] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:06.858 [2024-07-15 16:20:42.475433] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.858 [2024-07-15 16:20:42.475977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.858 [2024-07-15 16:20:42.475994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.858 [2024-07-15 16:20:42.476002] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.858 16:20:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.858 [2024-07-15 16:20:42.476225] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.858 [2024-07-15 16:20:42.476441] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.858 [2024-07-15 16:20:42.476449] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.858 [2024-07-15 16:20:42.476456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.858 16:20:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:06.858 16:20:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.858 16:20:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:06.858 [2024-07-15 16:20:42.479940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.858 [2024-07-15 16:20:42.489226] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.858 [2024-07-15 16:20:42.489987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.858 [2024-07-15 16:20:42.490023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.858 [2024-07-15 16:20:42.490035] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.858 [2024-07-15 16:20:42.490284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.858 [2024-07-15 16:20:42.490508] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.858 [2024-07-15 16:20:42.490517] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.858 [2024-07-15 16:20:42.490525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:06.858 [2024-07-15 16:20:42.494014] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.858 [2024-07-15 16:20:42.503073] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.858 Malloc0 00:29:06.858 [2024-07-15 16:20:42.503718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.858 [2024-07-15 16:20:42.503754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.858 [2024-07-15 16:20:42.503765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.858 16:20:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.858 [2024-07-15 16:20:42.504001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.858 [2024-07-15 16:20:42.504228] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.858 [2024-07-15 16:20:42.504238] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.858 [2024-07-15 16:20:42.504247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.858 16:20:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:06.858 16:20:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.858 16:20:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:06.858 [2024-07-15 16:20:42.507739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.858 16:20:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.858 16:20:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:06.858 16:20:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.858 16:20:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:06.858 [2024-07-15 16:20:42.517003] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.858 [2024-07-15 16:20:42.517640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.859 [2024-07-15 16:20:42.517659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.859 [2024-07-15 16:20:42.517667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.859 [2024-07-15 16:20:42.517883] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.859 [2024-07-15 16:20:42.518099] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.859 [2024-07-15 16:20:42.518109] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.859 [2024-07-15 16:20:42.518116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:06.859 [2024-07-15 16:20:42.521605] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.859 16:20:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.859 16:20:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:06.859 16:20:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.859 16:20:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:06.859 [2024-07-15 16:20:42.530862] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.859 [2024-07-15 16:20:42.531456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.859 [2024-07-15 16:20:42.531495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d93b0 with addr=10.0.0.2, port=4420 00:29:06.859 [2024-07-15 16:20:42.531506] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d93b0 is same with the state(5) to be set 00:29:06.859 [2024-07-15 16:20:42.531743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d93b0 (9): Bad file descriptor 00:29:06.859 [2024-07-15 16:20:42.531962] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.859 [2024-07-15 16:20:42.531970] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.859 [2024-07-15 16:20:42.531977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.859 [2024-07-15 16:20:42.534908] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:06.859 [2024-07-15 16:20:42.535478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.859 16:20:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.859 16:20:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2469406 00:29:06.859 [2024-07-15 16:20:42.544752] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.859 [2024-07-15 16:20:42.591704] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
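The interleaved rpc_cmd calls above (host/bdevperf.sh@17 through @21) are what finally bring the target up and let the last reset succeed. Pulled out of the xtrace noise, the equivalent standalone RPC sequence is sketched below; it assumes a running nvmf_tgt and the stock scripts/rpc.py from the SPDK tree, with every argument copied from the trace rather than invented here.

    # Create the TCP transport (flags copied verbatim from the rpc_cmd trace).
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    # 64 MiB malloc bdev with 512-byte blocks to back the namespace.
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # Subsystem cnode1: allow any host (-a), serial number from the test (-s).
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    # Attach the malloc bdev as a namespace of the subsystem.
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # Listen on 10.0.0.2:4420 -- the point at which the initiator's
    # ECONNREFUSED resets stop and 'Resetting controller successful' appears.
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420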
00:29:16.853 00:29:16.853 Latency(us) 00:29:16.853 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:16.853 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:16.853 Verification LBA range: start 0x0 length 0x4000 00:29:16.853 Nvme1n1 : 15.00 8350.47 32.62 9849.80 0.00 7007.93 1044.48 18896.21 00:29:16.853 =================================================================================================================== 00:29:16.853 Total : 8350.47 32.62 9849.80 0.00 7007.93 1044.48 18896.21 00:29:16.853 16:20:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:29:16.853 16:20:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:16.853 16:20:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:16.853 16:20:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:16.853 16:20:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.853 16:20:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:29:16.853 16:20:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:29:16.853 16:20:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:16.853 16:20:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:29:16.853 16:20:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:16.853 16:20:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:29:16.853 16:20:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:16.853 16:20:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:16.853 rmmod nvme_tcp 00:29:16.853 rmmod nvme_fabrics 00:29:16.853 rmmod nvme_keyring 00:29:16.853 16:20:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:16.853 16:20:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:29:16.853 16:20:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:29:16.853 16:20:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 2470727 ']' 00:29:16.853 16:20:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2470727 00:29:16.853 16:20:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 2470727 ']' 00:29:16.853 16:20:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 2470727 00:29:16.853 16:20:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:29:16.853 16:20:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:16.853 16:20:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2470727 00:29:16.854 16:20:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:16.854 16:20:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:16.854 16:20:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2470727' 00:29:16.854 killing process with pid 2470727 00:29:16.854 16:20:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 2470727 00:29:16.854 16:20:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 2470727 00:29:16.854 16:20:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:16.854 16:20:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
00:29:16.854 16:20:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:16.854 16:20:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:16.854 16:20:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:16.854 16:20:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:16.854 16:20:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:16.854 16:20:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:17.794 16:20:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:17.794 00:29:17.794 real 0m27.564s 00:29:17.794 user 1m3.091s 00:29:17.794 sys 0m7.013s 00:29:17.794 16:20:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:17.794 16:20:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:17.794 ************************************ 00:29:17.794 END TEST nvmf_bdevperf 00:29:17.794 ************************************ 00:29:17.794 16:20:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:17.794 16:20:53 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:17.794 16:20:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:17.794 16:20:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:17.794 16:20:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:17.794 ************************************ 00:29:17.794 START TEST nvmf_target_disconnect 00:29:17.794 ************************************ 00:29:17.794 16:20:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:18.055 * Looking for test storage... 
00:29:18.055 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:18.055 16:20:53 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:18.055 16:20:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:29:18.055 16:20:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:18.055 16:20:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:18.055 16:20:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:18.055 16:20:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:18.055 16:20:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:18.055 16:20:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:18.055 16:20:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:18.055 16:20:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:18.055 16:20:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:18.055 16:20:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:18.055 16:20:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:18.055 16:20:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:18.055 16:20:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:18.055 16:20:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:18.055 16:20:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:18.055 16:20:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:18.055 16:20:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:18.055 16:20:53 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:18.055 16:20:53 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:18.055 16:20:53 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:18.055 16:20:53 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.055 16:20:53 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.055 16:20:53 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.055 16:20:53 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:18.056 16:20:53 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.056 16:20:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:29:18.056 16:20:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:18.056 16:20:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:18.056 16:20:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:18.056 16:20:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:18.056 16:20:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:18.056 16:20:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:18.056 16:20:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:18.056 16:20:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:18.056 16:20:53 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:18.056 16:20:53 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:18.056 16:20:53 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:18.056 16:20:53 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:18.056 16:20:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:18.056 16:20:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:18.056 16:20:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:29:18.056 16:20:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:18.056 16:20:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:18.056 16:20:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:18.056 16:20:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:18.056 16:20:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:18.056 16:20:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:18.056 16:20:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:18.056 16:20:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:29:18.056 16:20:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:26.196 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:26.196 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:26.196 16:21:00 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:26.196 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:26.196 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:26.196 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:26.197 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:26.197 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.500 ms 00:29:26.197 00:29:26.197 --- 10.0.0.2 ping statistics --- 00:29:26.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:26.197 rtt min/avg/max/mdev = 0.500/0.500/0.500/0.000 ms 00:29:26.197 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:26.197 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:26.197 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.373 ms 00:29:26.197 00:29:26.197 --- 10.0.0.1 ping statistics --- 00:29:26.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:26.197 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:29:26.197 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:26.197 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:29:26.197 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:26.197 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:26.197 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:26.197 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:26.197 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:26.197 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:26.197 16:21:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:26.197 16:21:00 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:26.197 16:21:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:26.197 16:21:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:26.197 16:21:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:26.197 ************************************ 00:29:26.197 START TEST nvmf_target_disconnect_tc1 00:29:26.197 ************************************ 00:29:26.197 16:21:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:29:26.197 16:21:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:26.197 16:21:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:29:26.197 
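(Condensed for reference: the nvmf_tcp_init sequence traced above boils down to the commands below. This is a sketch assembled from the trace itself; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses belong to this particular test bed.)

  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1       # drop any stale addresses
  ip netns add cvl_0_0_ns_spdk                             # target interface gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # keep the firewall from dropping NVMe/TCP
  ping -c 1 10.0.0.2                                       # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1         # target -> initiator sanity check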
16:21:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:26.197 16:21:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:26.197 16:21:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:26.197 16:21:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:26.197 16:21:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:26.197 16:21:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:26.197 16:21:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:26.197 16:21:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:26.197 16:21:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:26.197 16:21:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:26.197 EAL: No free 2048 kB hugepages reported on node 1 00:29:26.197 [2024-07-15 16:21:01.108455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.197 [2024-07-15 16:21:01.108506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcb8e20 with addr=10.0.0.2, port=4420 00:29:26.197 [2024-07-15 16:21:01.108529] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:26.197 [2024-07-15 16:21:01.108538] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:26.197 [2024-07-15 16:21:01.108545] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:29:26.197 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:26.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:26.197 Initializing NVMe Controllers 00:29:26.197 16:21:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:29:26.197 16:21:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:26.197 16:21:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:26.197 16:21:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:26.197 00:29:26.197 real 0m0.115s 00:29:26.197 user 0m0.051s 00:29:26.197 sys 0m0.064s 
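(The tc1 case above is a negative test: nothing is listening on 10.0.0.2:4420 at this point, so the reconnect example's connect() is refused with errno 111, i.e. ECONNREFUSED, and the NOT/valid_exec_arg helpers count that non-zero exit as the expected outcome. Stripped of the xtrace noise, the assertion is roughly the sketch below; build/examples/reconnect is the SPDK example binary that the log invokes with its full workspace path.)

  # NOT succeeds only when the wrapped command fails, which is exactly what
  # tc1 wants here: probing a target that does not exist must not work
  NOT build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'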
00:29:26.197 16:21:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:26.197 16:21:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:26.197 ************************************ 00:29:26.197 END TEST nvmf_target_disconnect_tc1 00:29:26.197 ************************************ 00:29:26.197 16:21:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:29:26.197 16:21:01 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:26.197 16:21:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:26.197 16:21:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:26.197 16:21:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:26.197 ************************************ 00:29:26.197 START TEST nvmf_target_disconnect_tc2 00:29:26.197 ************************************ 00:29:26.197 16:21:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:29:26.197 16:21:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:26.197 16:21:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:26.197 16:21:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:26.197 16:21:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:26.197 16:21:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.197 16:21:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2476831 00:29:26.197 16:21:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2476831 00:29:26.197 16:21:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:26.197 16:21:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2476831 ']' 00:29:26.197 16:21:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:26.197 16:21:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:26.197 16:21:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:26.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:26.197 16:21:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:26.197 16:21:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.197 [2024-07-15 16:21:01.255150] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:29:26.197 [2024-07-15 16:21:01.255197] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:26.197 EAL: No free 2048 kB hugepages reported on node 1 00:29:26.197 [2024-07-15 16:21:01.336876] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:26.197 [2024-07-15 16:21:01.402020] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:26.197 [2024-07-15 16:21:01.402059] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:26.197 [2024-07-15 16:21:01.402067] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:26.197 [2024-07-15 16:21:01.402073] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:26.197 [2024-07-15 16:21:01.402079] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:26.197 [2024-07-15 16:21:01.402618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:29:26.197 [2024-07-15 16:21:01.402815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:29:26.197 [2024-07-15 16:21:01.402966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:29:26.197 [2024-07-15 16:21:01.402995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:29:26.458 16:21:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:26.458 16:21:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:29:26.458 16:21:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:26.458 16:21:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:26.458 16:21:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.459 16:21:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:26.459 16:21:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:26.459 16:21:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.459 16:21:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.459 Malloc0 00:29:26.459 16:21:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.459 16:21:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:26.459 16:21:02 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.459 16:21:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.459 [2024-07-15 16:21:02.154210] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:26.459 16:21:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.459 16:21:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:26.459 16:21:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.459 16:21:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.459 16:21:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.459 16:21:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:26.459 16:21:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.459 16:21:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.459 16:21:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.459 16:21:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:26.459 16:21:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.459 16:21:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.459 [2024-07-15 16:21:02.182552] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:26.459 16:21:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.459 16:21:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:26.459 16:21:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.459 16:21:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.459 16:21:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.459 16:21:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2477026 00:29:26.459 16:21:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:26.459 16:21:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:26.459 EAL: No free 2048 kB 
hugepages reported on node 1 00:29:28.370 16:21:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2476831 00:29:28.370 16:21:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:28.640 Read completed with error (sct=0, sc=8) 00:29:28.640 starting I/O failed 00:29:28.640 Read completed with error (sct=0, sc=8) 00:29:28.640 starting I/O failed 00:29:28.640 Read completed with error (sct=0, sc=8) 00:29:28.640 starting I/O failed 00:29:28.640 Read completed with error (sct=0, sc=8) 00:29:28.640 starting I/O failed 00:29:28.640 Read completed with error (sct=0, sc=8) 00:29:28.640 starting I/O failed 00:29:28.640 Read completed with error (sct=0, sc=8) 00:29:28.640 starting I/O failed 00:29:28.640 Read completed with error (sct=0, sc=8) 00:29:28.640 starting I/O failed 00:29:28.640 Read completed with error (sct=0, sc=8) 00:29:28.640 starting I/O failed 00:29:28.640 Read completed with error (sct=0, sc=8) 00:29:28.640 starting I/O failed 00:29:28.640 Write completed with error (sct=0, sc=8) 00:29:28.640 starting I/O failed 00:29:28.640 Write completed with error (sct=0, sc=8) 00:29:28.640 starting I/O failed 00:29:28.640 Write completed with error (sct=0, sc=8) 00:29:28.640 starting I/O failed 00:29:28.640 Write completed with error (sct=0, sc=8) 00:29:28.640 starting I/O failed 00:29:28.640 Write completed with error (sct=0, sc=8) 00:29:28.640 starting I/O failed 00:29:28.640 Write completed with error (sct=0, sc=8) 00:29:28.640 starting I/O failed 00:29:28.640 Read completed with error (sct=0, sc=8) 00:29:28.640 starting I/O failed 00:29:28.640 Write completed with error (sct=0, sc=8) 00:29:28.640 starting I/O failed 00:29:28.640 Read completed with error (sct=0, sc=8) 00:29:28.640 starting I/O failed 00:29:28.640 Write completed with error (sct=0, sc=8) 00:29:28.640 starting I/O failed 00:29:28.640 Read completed with error (sct=0, sc=8) 00:29:28.640 starting I/O failed 00:29:28.640 Read completed with error (sct=0, sc=8) 00:29:28.640 starting I/O failed 00:29:28.640 Read completed with error (sct=0, sc=8) 00:29:28.640 starting I/O failed 00:29:28.640 Write completed with error (sct=0, sc=8) 00:29:28.640 starting I/O failed 00:29:28.640 Read completed with error (sct=0, sc=8) 00:29:28.640 starting I/O failed 00:29:28.640 Read completed with error (sct=0, sc=8) 00:29:28.640 starting I/O failed 00:29:28.640 Read completed with error (sct=0, sc=8) 00:29:28.640 starting I/O failed 00:29:28.640 Read completed with error (sct=0, sc=8) 00:29:28.640 starting I/O failed 00:29:28.640 Read completed with error (sct=0, sc=8) 00:29:28.640 starting I/O failed 00:29:28.640 Read completed with error (sct=0, sc=8) 00:29:28.640 starting I/O failed 00:29:28.640 Write completed with error (sct=0, sc=8) 00:29:28.640 starting I/O failed 00:29:28.640 Read completed with error (sct=0, sc=8) 00:29:28.640 starting I/O failed 00:29:28.640 Write completed with error (sct=0, sc=8) 00:29:28.640 starting I/O failed 00:29:28.640 [2024-07-15 16:21:04.211692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.640 [2024-07-15 16:21:04.212074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.640 [2024-07-15 16:21:04.212092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.640 qpair failed and we were unable 
to recover it. 00:29:28.640 [2024-07-15 16:21:04.212604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.640 [2024-07-15 16:21:04.212634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.640 qpair failed and we were unable to recover it. 00:29:28.640 [2024-07-15 16:21:04.213068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.640 [2024-07-15 16:21:04.213078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.640 qpair failed and we were unable to recover it. 00:29:28.640 [2024-07-15 16:21:04.213661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.640 [2024-07-15 16:21:04.213690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.640 qpair failed and we were unable to recover it. 00:29:28.640 [2024-07-15 16:21:04.214128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.640 [2024-07-15 16:21:04.214138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.640 qpair failed and we were unable to recover it. 00:29:28.640 [2024-07-15 16:21:04.214542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.640 [2024-07-15 16:21:04.214570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.640 qpair failed and we were unable to recover it. 00:29:28.640 [2024-07-15 16:21:04.214940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.640 [2024-07-15 16:21:04.214949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.640 qpair failed and we were unable to recover it. 00:29:28.640 [2024-07-15 16:21:04.215381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.640 [2024-07-15 16:21:04.215410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.640 qpair failed and we were unable to recover it. 00:29:28.640 [2024-07-15 16:21:04.215885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.640 [2024-07-15 16:21:04.215895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.640 qpair failed and we were unable to recover it. 00:29:28.640 [2024-07-15 16:21:04.216346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.640 [2024-07-15 16:21:04.216374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.640 qpair failed and we were unable to recover it. 00:29:28.640 [2024-07-15 16:21:04.216823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.640 [2024-07-15 16:21:04.216832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.640 qpair failed and we were unable to recover it. 
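(The wall of failed queue pairs in this stretch is expected: host/target_disconnect.sh@45 has just issued kill -9 against the nvmf_tgt serving 10.0.0.2:4420 (pid 2476831), so the reconnect example's in-flight I/O errors out and every retry gets connection refused, errno 111. For reference, the target configuration that rpc_cmd pushed in before the kill corresponds roughly to the scripts/rpc.py calls below; this is a sketch, rpc_cmd being the test harness's wrapper around the same RPC methods, with the default /var/tmp/spdk.sock RPC socket assumed.)

  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MB malloc bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420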
00:29:28.640 [2024-07-15 16:21:04.217351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.640 [2024-07-15 16:21:04.217380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.640 qpair failed and we were unable to recover it. 00:29:28.640 [2024-07-15 16:21:04.217694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.640 [2024-07-15 16:21:04.217703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.640 qpair failed and we were unable to recover it. 00:29:28.640 [2024-07-15 16:21:04.217986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.640 [2024-07-15 16:21:04.217994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.640 qpair failed and we were unable to recover it. 00:29:28.640 [2024-07-15 16:21:04.218535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.640 [2024-07-15 16:21:04.218568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.640 qpair failed and we were unable to recover it. 00:29:28.640 [2024-07-15 16:21:04.218962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.640 [2024-07-15 16:21:04.218972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.640 qpair failed and we were unable to recover it. 00:29:28.640 [2024-07-15 16:21:04.219388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.640 [2024-07-15 16:21:04.219418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.641 qpair failed and we were unable to recover it. 00:29:28.641 [2024-07-15 16:21:04.219903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.641 [2024-07-15 16:21:04.219912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.641 qpair failed and we were unable to recover it. 00:29:28.641 [2024-07-15 16:21:04.220334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.641 [2024-07-15 16:21:04.220364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.641 qpair failed and we were unable to recover it. 00:29:28.641 [2024-07-15 16:21:04.220713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.641 [2024-07-15 16:21:04.220722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.641 qpair failed and we were unable to recover it. 00:29:28.641 [2024-07-15 16:21:04.220958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.641 [2024-07-15 16:21:04.220967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.641 qpair failed and we were unable to recover it. 
00:29:28.641 [2024-07-15 16:21:04.221390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.641 [2024-07-15 16:21:04.221399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.641 qpair failed and we were unable to recover it. 00:29:28.641 [2024-07-15 16:21:04.221726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.641 [2024-07-15 16:21:04.221734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.641 qpair failed and we were unable to recover it. 00:29:28.641 [2024-07-15 16:21:04.222034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.641 [2024-07-15 16:21:04.222041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.641 qpair failed and we were unable to recover it. 00:29:28.641 [2024-07-15 16:21:04.222460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.641 [2024-07-15 16:21:04.222468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.641 qpair failed and we were unable to recover it. 00:29:28.641 [2024-07-15 16:21:04.222663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.641 [2024-07-15 16:21:04.222673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.641 qpair failed and we were unable to recover it. 00:29:28.641 [2024-07-15 16:21:04.222782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.641 [2024-07-15 16:21:04.222789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.641 qpair failed and we were unable to recover it. 00:29:28.641 [2024-07-15 16:21:04.223088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.641 [2024-07-15 16:21:04.223095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.641 qpair failed and we were unable to recover it. 00:29:28.641 [2024-07-15 16:21:04.223400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.641 [2024-07-15 16:21:04.223408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.641 qpair failed and we were unable to recover it. 00:29:28.641 [2024-07-15 16:21:04.223660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.641 [2024-07-15 16:21:04.223667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.641 qpair failed and we were unable to recover it. 00:29:28.641 [2024-07-15 16:21:04.223981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.641 [2024-07-15 16:21:04.223989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.641 qpair failed and we were unable to recover it. 
00:29:28.641 [2024-07-15 16:21:04.224407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.641 [2024-07-15 16:21:04.224416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.641 qpair failed and we were unable to recover it. 00:29:28.641 [2024-07-15 16:21:04.224709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.641 [2024-07-15 16:21:04.224717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.641 qpair failed and we were unable to recover it. 00:29:28.641 [2024-07-15 16:21:04.224998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.641 [2024-07-15 16:21:04.225005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.641 qpair failed and we were unable to recover it. 00:29:28.641 [2024-07-15 16:21:04.225442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.641 [2024-07-15 16:21:04.225450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.641 qpair failed and we were unable to recover it. 00:29:28.641 [2024-07-15 16:21:04.225872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.641 [2024-07-15 16:21:04.225879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.641 qpair failed and we were unable to recover it. 00:29:28.641 [2024-07-15 16:21:04.226136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.641 [2024-07-15 16:21:04.226144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.641 qpair failed and we were unable to recover it. 00:29:28.641 [2024-07-15 16:21:04.226379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.641 [2024-07-15 16:21:04.226387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.641 qpair failed and we were unable to recover it. 00:29:28.641 [2024-07-15 16:21:04.226728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.641 [2024-07-15 16:21:04.226735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.641 qpair failed and we were unable to recover it. 00:29:28.641 [2024-07-15 16:21:04.227144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.641 [2024-07-15 16:21:04.227151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.641 qpair failed and we were unable to recover it. 00:29:28.641 [2024-07-15 16:21:04.227526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.641 [2024-07-15 16:21:04.227533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.641 qpair failed and we were unable to recover it. 
00:29:28.641 [2024-07-15 16:21:04.227946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.641 [2024-07-15 16:21:04.227954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.641 qpair failed and we were unable to recover it. 00:29:28.641 [2024-07-15 16:21:04.228332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.641 [2024-07-15 16:21:04.228340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.641 qpair failed and we were unable to recover it. 00:29:28.641 [2024-07-15 16:21:04.230456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.641 [2024-07-15 16:21:04.230485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.641 qpair failed and we were unable to recover it. 00:29:28.641 [2024-07-15 16:21:04.230947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.641 [2024-07-15 16:21:04.230955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.641 qpair failed and we were unable to recover it. 00:29:28.641 [2024-07-15 16:21:04.231438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.641 [2024-07-15 16:21:04.231466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.641 qpair failed and we were unable to recover it. 00:29:28.641 [2024-07-15 16:21:04.231852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.641 [2024-07-15 16:21:04.231861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.641 qpair failed and we were unable to recover it. 00:29:28.641 [2024-07-15 16:21:04.232063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.641 [2024-07-15 16:21:04.232071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.641 qpair failed and we were unable to recover it. 00:29:28.641 [2024-07-15 16:21:04.232475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.641 [2024-07-15 16:21:04.232483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.641 qpair failed and we were unable to recover it. 00:29:28.641 [2024-07-15 16:21:04.232818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 16:21:04.232824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 16:21:04.233235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 16:21:04.233243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 
00:29:28.642 [2024-07-15 16:21:04.233680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 16:21:04.233687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 16:21:04.234064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 16:21:04.234071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 16:21:04.234535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 16:21:04.234542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 16:21:04.234917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 16:21:04.234931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 16:21:04.235356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 16:21:04.235385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 16:21:04.235802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 16:21:04.235811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 16:21:04.236227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 16:21:04.236235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 16:21:04.236635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 16:21:04.236643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 16:21:04.237063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 16:21:04.237071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 16:21:04.237372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 16:21:04.237380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 
00:29:28.642 [2024-07-15 16:21:04.237843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 16:21:04.237849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 16:21:04.238047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 16:21:04.238054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 16:21:04.238432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 16:21:04.238439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 16:21:04.238828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 16:21:04.238836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 16:21:04.239259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 16:21:04.239266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 16:21:04.239694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 16:21:04.239700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 16:21:04.240015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 16:21:04.240022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 16:21:04.240432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 16:21:04.240439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 16:21:04.240735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 16:21:04.240743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 16:21:04.241129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 16:21:04.241137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 
00:29:28.642 [2024-07-15 16:21:04.241515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 16:21:04.241522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 16:21:04.241776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 16:21:04.241783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 16:21:04.242158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 16:21:04.242165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 16:21:04.242494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 16:21:04.242501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 16:21:04.242864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 16:21:04.242871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 16:21:04.243166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 16:21:04.243173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 16:21:04.243461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 16:21:04.243468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 16:21:04.243851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 16:21:04.243859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 16:21:04.244231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 16:21:04.244238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 16:21:04.244624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 16:21:04.244630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 
00:29:28.642 [2024-07-15 16:21:04.245011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 16:21:04.245018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 16:21:04.245473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 16:21:04.245481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 16:21:04.245854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 16:21:04.245861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 16:21:04.246274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 16:21:04.246282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 16:21:04.246682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 16:21:04.246688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 16:21:04.247181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 16:21:04.247189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 16:21:04.247411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.643 [2024-07-15 16:21:04.247420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.643 qpair failed and we were unable to recover it. 00:29:28.643 [2024-07-15 16:21:04.247745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.643 [2024-07-15 16:21:04.247752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.643 qpair failed and we were unable to recover it. 00:29:28.643 [2024-07-15 16:21:04.248142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.643 [2024-07-15 16:21:04.248149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.643 qpair failed and we were unable to recover it. 00:29:28.643 [2024-07-15 16:21:04.248553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.643 [2024-07-15 16:21:04.248560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.643 qpair failed and we were unable to recover it. 
00:29:28.643 [2024-07-15 16:21:04.248974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.643 [2024-07-15 16:21:04.248981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420
00:29:28.643 qpair failed and we were unable to recover it.
[the same three-line failure sequence repeats continuously, with only the timestamps advancing, through 00:29:28.649 / 2024-07-15 16:21:04.330577: connect() failed, errno = 111; sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.]
00:29:28.649 [2024-07-15 16:21:04.330952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-07-15 16:21:04.330959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-07-15 16:21:04.331380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-07-15 16:21:04.331387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-07-15 16:21:04.331587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-07-15 16:21:04.331596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-07-15 16:21:04.332001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-07-15 16:21:04.332008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-07-15 16:21:04.332384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-07-15 16:21:04.332391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-07-15 16:21:04.332809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-07-15 16:21:04.332816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-07-15 16:21:04.333103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-07-15 16:21:04.333111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-07-15 16:21:04.333503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-07-15 16:21:04.333512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-07-15 16:21:04.333816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-07-15 16:21:04.333823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-07-15 16:21:04.334110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-07-15 16:21:04.334117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 
00:29:28.649 [2024-07-15 16:21:04.334533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-07-15 16:21:04.334540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-07-15 16:21:04.334936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-07-15 16:21:04.334943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-07-15 16:21:04.335425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-07-15 16:21:04.335453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-07-15 16:21:04.335667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-07-15 16:21:04.335677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-07-15 16:21:04.336080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-07-15 16:21:04.336089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-07-15 16:21:04.336527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-07-15 16:21:04.336534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-07-15 16:21:04.336915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-07-15 16:21:04.336921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-07-15 16:21:04.337338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-07-15 16:21:04.337366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-07-15 16:21:04.337811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-07-15 16:21:04.337820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-07-15 16:21:04.338243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-07-15 16:21:04.338250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 
00:29:28.649 [2024-07-15 16:21:04.338652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-07-15 16:21:04.338662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-07-15 16:21:04.338966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-07-15 16:21:04.338973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-07-15 16:21:04.339347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-07-15 16:21:04.339354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-07-15 16:21:04.339720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-07-15 16:21:04.339727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-07-15 16:21:04.340102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-07-15 16:21:04.340108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-07-15 16:21:04.340495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-07-15 16:21:04.340503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-07-15 16:21:04.340802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-07-15 16:21:04.340809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.650 [2024-07-15 16:21:04.341189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.650 [2024-07-15 16:21:04.341196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.650 qpair failed and we were unable to recover it. 00:29:28.650 [2024-07-15 16:21:04.341604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.650 [2024-07-15 16:21:04.341611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.650 qpair failed and we were unable to recover it. 00:29:28.650 [2024-07-15 16:21:04.342087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.650 [2024-07-15 16:21:04.342094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.650 qpair failed and we were unable to recover it. 
00:29:28.650 [2024-07-15 16:21:04.342440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.650 [2024-07-15 16:21:04.342448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.650 qpair failed and we were unable to recover it. 00:29:28.650 [2024-07-15 16:21:04.342737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.650 [2024-07-15 16:21:04.342745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.650 qpair failed and we were unable to recover it. 00:29:28.650 [2024-07-15 16:21:04.343140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.650 [2024-07-15 16:21:04.343147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.650 qpair failed and we were unable to recover it. 00:29:28.650 [2024-07-15 16:21:04.343550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.650 [2024-07-15 16:21:04.343557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.650 qpair failed and we were unable to recover it. 00:29:28.650 [2024-07-15 16:21:04.343966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.650 [2024-07-15 16:21:04.343973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.650 qpair failed and we were unable to recover it. 00:29:28.650 [2024-07-15 16:21:04.344360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.650 [2024-07-15 16:21:04.344368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.650 qpair failed and we were unable to recover it. 00:29:28.650 [2024-07-15 16:21:04.344771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.650 [2024-07-15 16:21:04.344778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.650 qpair failed and we were unable to recover it. 00:29:28.650 [2024-07-15 16:21:04.344963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.650 [2024-07-15 16:21:04.344972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.650 qpair failed and we were unable to recover it. 00:29:28.650 [2024-07-15 16:21:04.345327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.650 [2024-07-15 16:21:04.345334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.650 qpair failed and we were unable to recover it. 00:29:28.650 [2024-07-15 16:21:04.345708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.650 [2024-07-15 16:21:04.345716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.650 qpair failed and we were unable to recover it. 
00:29:28.650 [2024-07-15 16:21:04.346126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.650 [2024-07-15 16:21:04.346133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.650 qpair failed and we were unable to recover it. 00:29:28.650 [2024-07-15 16:21:04.346493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.650 [2024-07-15 16:21:04.346500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.650 qpair failed and we were unable to recover it. 00:29:28.650 [2024-07-15 16:21:04.346786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.650 [2024-07-15 16:21:04.346794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.650 qpair failed and we were unable to recover it. 00:29:28.650 [2024-07-15 16:21:04.347203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.650 [2024-07-15 16:21:04.347210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.650 qpair failed and we were unable to recover it. 00:29:28.650 [2024-07-15 16:21:04.347604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.650 [2024-07-15 16:21:04.347611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.650 qpair failed and we were unable to recover it. 00:29:28.650 [2024-07-15 16:21:04.348000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.650 [2024-07-15 16:21:04.348007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.650 qpair failed and we were unable to recover it. 00:29:28.650 [2024-07-15 16:21:04.348384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.650 [2024-07-15 16:21:04.348392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.650 qpair failed and we were unable to recover it. 00:29:28.650 [2024-07-15 16:21:04.348808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.650 [2024-07-15 16:21:04.348816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.650 qpair failed and we were unable to recover it. 00:29:28.650 [2024-07-15 16:21:04.349224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.650 [2024-07-15 16:21:04.349231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.650 qpair failed and we were unable to recover it. 00:29:28.650 [2024-07-15 16:21:04.349514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.650 [2024-07-15 16:21:04.349521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.650 qpair failed and we were unable to recover it. 
00:29:28.650 [2024-07-15 16:21:04.349915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.650 [2024-07-15 16:21:04.349922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.650 qpair failed and we were unable to recover it. 00:29:28.650 [2024-07-15 16:21:04.350313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.650 [2024-07-15 16:21:04.350320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.650 qpair failed and we were unable to recover it. 00:29:28.650 [2024-07-15 16:21:04.350691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.650 [2024-07-15 16:21:04.350699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.650 qpair failed and we were unable to recover it. 00:29:28.650 [2024-07-15 16:21:04.351090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.650 [2024-07-15 16:21:04.351098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.650 qpair failed and we were unable to recover it. 00:29:28.650 [2024-07-15 16:21:04.351463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.650 [2024-07-15 16:21:04.351471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.650 qpair failed and we were unable to recover it. 00:29:28.650 [2024-07-15 16:21:04.351879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.650 [2024-07-15 16:21:04.351887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.650 qpair failed and we were unable to recover it. 00:29:28.650 [2024-07-15 16:21:04.352350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.650 [2024-07-15 16:21:04.352357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.650 qpair failed and we were unable to recover it. 00:29:28.650 [2024-07-15 16:21:04.352553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.650 [2024-07-15 16:21:04.352563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.650 qpair failed and we were unable to recover it. 00:29:28.650 [2024-07-15 16:21:04.352951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.650 [2024-07-15 16:21:04.352957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.650 qpair failed and we were unable to recover it. 00:29:28.650 [2024-07-15 16:21:04.353357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.650 [2024-07-15 16:21:04.353365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.651 qpair failed and we were unable to recover it. 
00:29:28.651 [2024-07-15 16:21:04.353707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.651 [2024-07-15 16:21:04.353717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.651 qpair failed and we were unable to recover it. 00:29:28.651 [2024-07-15 16:21:04.354131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.651 [2024-07-15 16:21:04.354138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.651 qpair failed and we were unable to recover it. 00:29:28.651 [2024-07-15 16:21:04.354385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.651 [2024-07-15 16:21:04.354392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.651 qpair failed and we were unable to recover it. 00:29:28.651 [2024-07-15 16:21:04.354772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.651 [2024-07-15 16:21:04.354779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.651 qpair failed and we were unable to recover it. 00:29:28.651 [2024-07-15 16:21:04.355165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.651 [2024-07-15 16:21:04.355172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.651 qpair failed and we were unable to recover it. 00:29:28.651 [2024-07-15 16:21:04.355359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.651 [2024-07-15 16:21:04.355367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.651 qpair failed and we were unable to recover it. 00:29:28.651 [2024-07-15 16:21:04.355803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.651 [2024-07-15 16:21:04.355810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.651 qpair failed and we were unable to recover it. 00:29:28.651 [2024-07-15 16:21:04.356179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.651 [2024-07-15 16:21:04.356187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.651 qpair failed and we were unable to recover it. 00:29:28.651 [2024-07-15 16:21:04.356494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.651 [2024-07-15 16:21:04.356502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.651 qpair failed and we were unable to recover it. 00:29:28.651 [2024-07-15 16:21:04.356912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.651 [2024-07-15 16:21:04.356919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.651 qpair failed and we were unable to recover it. 
00:29:28.651 [2024-07-15 16:21:04.357332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.651 [2024-07-15 16:21:04.357340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.651 qpair failed and we were unable to recover it. 00:29:28.651 [2024-07-15 16:21:04.357733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.651 [2024-07-15 16:21:04.357741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.651 qpair failed and we were unable to recover it. 00:29:28.651 [2024-07-15 16:21:04.358135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.651 [2024-07-15 16:21:04.358142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.651 qpair failed and we were unable to recover it. 00:29:28.651 [2024-07-15 16:21:04.358529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.651 [2024-07-15 16:21:04.358536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.651 qpair failed and we were unable to recover it. 00:29:28.651 [2024-07-15 16:21:04.358926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.651 [2024-07-15 16:21:04.358934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.651 qpair failed and we were unable to recover it. 00:29:28.651 [2024-07-15 16:21:04.359325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.651 [2024-07-15 16:21:04.359332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.651 qpair failed and we were unable to recover it. 00:29:28.651 [2024-07-15 16:21:04.359701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.651 [2024-07-15 16:21:04.359708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.651 qpair failed and we were unable to recover it. 00:29:28.651 [2024-07-15 16:21:04.360150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.651 [2024-07-15 16:21:04.360157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.651 qpair failed and we were unable to recover it. 00:29:28.651 [2024-07-15 16:21:04.360322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.651 [2024-07-15 16:21:04.360330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.651 qpair failed and we were unable to recover it. 00:29:28.651 [2024-07-15 16:21:04.360752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.651 [2024-07-15 16:21:04.360758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.651 qpair failed and we were unable to recover it. 
00:29:28.651 [2024-07-15 16:21:04.361168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.651 [2024-07-15 16:21:04.361175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.651 qpair failed and we were unable to recover it. 00:29:28.651 [2024-07-15 16:21:04.361585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.651 [2024-07-15 16:21:04.361592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.651 qpair failed and we were unable to recover it. 00:29:28.651 [2024-07-15 16:21:04.361962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.651 [2024-07-15 16:21:04.361969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.651 qpair failed and we were unable to recover it. 00:29:28.651 [2024-07-15 16:21:04.362163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.651 [2024-07-15 16:21:04.362171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.651 qpair failed and we were unable to recover it. 00:29:28.651 [2024-07-15 16:21:04.362578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.651 [2024-07-15 16:21:04.362585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.651 qpair failed and we were unable to recover it. 00:29:28.651 [2024-07-15 16:21:04.362996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.651 [2024-07-15 16:21:04.363004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.651 qpair failed and we were unable to recover it. 00:29:28.651 [2024-07-15 16:21:04.363415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.651 [2024-07-15 16:21:04.363423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.651 qpair failed and we were unable to recover it. 00:29:28.651 [2024-07-15 16:21:04.363816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.651 [2024-07-15 16:21:04.363825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.651 qpair failed and we were unable to recover it. 00:29:28.651 [2024-07-15 16:21:04.364211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.651 [2024-07-15 16:21:04.364218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.651 qpair failed and we were unable to recover it. 00:29:28.651 [2024-07-15 16:21:04.364595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.651 [2024-07-15 16:21:04.364602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.651 qpair failed and we were unable to recover it. 
00:29:28.651 [2024-07-15 16:21:04.365039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.651 [2024-07-15 16:21:04.365045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.651 qpair failed and we were unable to recover it. 00:29:28.651 [2024-07-15 16:21:04.365427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.651 [2024-07-15 16:21:04.365434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.651 qpair failed and we were unable to recover it. 00:29:28.651 [2024-07-15 16:21:04.365821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.651 [2024-07-15 16:21:04.365829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.651 qpair failed and we were unable to recover it. 00:29:28.651 [2024-07-15 16:21:04.366229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.651 [2024-07-15 16:21:04.366236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.651 qpair failed and we were unable to recover it. 00:29:28.651 [2024-07-15 16:21:04.366634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.651 [2024-07-15 16:21:04.366640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.651 qpair failed and we were unable to recover it. 00:29:28.651 [2024-07-15 16:21:04.366831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.651 [2024-07-15 16:21:04.366838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.651 qpair failed and we were unable to recover it. 00:29:28.651 [2024-07-15 16:21:04.367238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.651 [2024-07-15 16:21:04.367245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.651 qpair failed and we were unable to recover it. 00:29:28.651 [2024-07-15 16:21:04.367647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.652 [2024-07-15 16:21:04.367655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.652 qpair failed and we were unable to recover it. 00:29:28.652 [2024-07-15 16:21:04.368032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.652 [2024-07-15 16:21:04.368040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.652 qpair failed and we were unable to recover it. 00:29:28.652 [2024-07-15 16:21:04.368331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.652 [2024-07-15 16:21:04.368338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.652 qpair failed and we were unable to recover it. 
00:29:28.652 [2024-07-15 16:21:04.368715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.652 [2024-07-15 16:21:04.368724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.652 qpair failed and we were unable to recover it. 00:29:28.652 [2024-07-15 16:21:04.369135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.652 [2024-07-15 16:21:04.369142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.652 qpair failed and we were unable to recover it. 00:29:28.652 [2024-07-15 16:21:04.369523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.652 [2024-07-15 16:21:04.369530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.652 qpair failed and we were unable to recover it. 00:29:28.652 [2024-07-15 16:21:04.369943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.652 [2024-07-15 16:21:04.369950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.652 qpair failed and we were unable to recover it. 00:29:28.652 [2024-07-15 16:21:04.370242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.652 [2024-07-15 16:21:04.370250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.652 qpair failed and we were unable to recover it. 00:29:28.652 [2024-07-15 16:21:04.370649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.652 [2024-07-15 16:21:04.370656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.652 qpair failed and we were unable to recover it. 00:29:28.652 [2024-07-15 16:21:04.371075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.652 [2024-07-15 16:21:04.371083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.652 qpair failed and we were unable to recover it. 00:29:28.652 [2024-07-15 16:21:04.371477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.652 [2024-07-15 16:21:04.371485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.652 qpair failed and we were unable to recover it. 00:29:28.652 [2024-07-15 16:21:04.371876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.652 [2024-07-15 16:21:04.371882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.652 qpair failed and we were unable to recover it. 00:29:28.652 [2024-07-15 16:21:04.372297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.652 [2024-07-15 16:21:04.372305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.652 qpair failed and we were unable to recover it. 
00:29:28.652 [2024-07-15 16:21:04.372580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.652 [2024-07-15 16:21:04.372587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.652 qpair failed and we were unable to recover it. 00:29:28.652 [2024-07-15 16:21:04.372978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.652 [2024-07-15 16:21:04.372985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.652 qpair failed and we were unable to recover it. 00:29:28.652 [2024-07-15 16:21:04.373355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.652 [2024-07-15 16:21:04.373362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.652 qpair failed and we were unable to recover it. 00:29:28.652 [2024-07-15 16:21:04.373771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.652 [2024-07-15 16:21:04.373778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.652 qpair failed and we were unable to recover it. 00:29:28.652 [2024-07-15 16:21:04.374155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.652 [2024-07-15 16:21:04.374163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.652 qpair failed and we were unable to recover it. 00:29:28.652 [2024-07-15 16:21:04.374573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.652 [2024-07-15 16:21:04.374580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.652 qpair failed and we were unable to recover it. 00:29:28.652 [2024-07-15 16:21:04.374973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.652 [2024-07-15 16:21:04.374980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.652 qpair failed and we were unable to recover it. 00:29:28.652 [2024-07-15 16:21:04.375370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.652 [2024-07-15 16:21:04.375378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.652 qpair failed and we were unable to recover it. 00:29:28.652 [2024-07-15 16:21:04.375776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.652 [2024-07-15 16:21:04.375782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.652 qpair failed and we were unable to recover it. 00:29:28.652 [2024-07-15 16:21:04.376194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.652 [2024-07-15 16:21:04.376201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.652 qpair failed and we were unable to recover it. 
00:29:28.652 [2024-07-15 16:21:04.376615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.652 [2024-07-15 16:21:04.376622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.652 qpair failed and we were unable to recover it. 00:29:28.652 [2024-07-15 16:21:04.376867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.652 [2024-07-15 16:21:04.376875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.652 qpair failed and we were unable to recover it. 00:29:28.652 [2024-07-15 16:21:04.377293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.652 [2024-07-15 16:21:04.377300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.652 qpair failed and we were unable to recover it. 00:29:28.652 [2024-07-15 16:21:04.377680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.652 [2024-07-15 16:21:04.377687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.652 qpair failed and we were unable to recover it. 00:29:28.652 [2024-07-15 16:21:04.378059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.652 [2024-07-15 16:21:04.378065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.652 qpair failed and we were unable to recover it. 00:29:28.652 [2024-07-15 16:21:04.378449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.652 [2024-07-15 16:21:04.378457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.652 qpair failed and we were unable to recover it. 00:29:28.652 [2024-07-15 16:21:04.378854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.652 [2024-07-15 16:21:04.378862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.652 qpair failed and we were unable to recover it. 00:29:28.652 [2024-07-15 16:21:04.379258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.652 [2024-07-15 16:21:04.379265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.652 qpair failed and we were unable to recover it. 00:29:28.652 [2024-07-15 16:21:04.379678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.652 [2024-07-15 16:21:04.379685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.652 qpair failed and we were unable to recover it. 00:29:28.652 [2024-07-15 16:21:04.380096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.652 [2024-07-15 16:21:04.380103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.652 qpair failed and we were unable to recover it. 
00:29:28.652 [2024-07-15 16:21:04.380497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.652 [2024-07-15 16:21:04.380504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.652 qpair failed and we were unable to recover it.
00:29:28.652 - 00:29:28.658 [2024-07-15 16:21:04.380814 - 16:21:04.463378] The same error sequence (posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every subsequent connection attempt logged in this interval.
00:29:28.658 [2024-07-15 16:21:04.463777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.658 [2024-07-15 16:21:04.463789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.658 qpair failed and we were unable to recover it. 00:29:28.658 [2024-07-15 16:21:04.464197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.658 [2024-07-15 16:21:04.464204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.658 qpair failed and we were unable to recover it. 00:29:28.658 [2024-07-15 16:21:04.464593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.658 [2024-07-15 16:21:04.464600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.658 qpair failed and we were unable to recover it. 00:29:28.659 [2024-07-15 16:21:04.464982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-07-15 16:21:04.464989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 00:29:28.659 [2024-07-15 16:21:04.465379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-07-15 16:21:04.465386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 00:29:28.659 [2024-07-15 16:21:04.465714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-07-15 16:21:04.465721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 00:29:28.659 [2024-07-15 16:21:04.466102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-07-15 16:21:04.466109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 00:29:28.659 [2024-07-15 16:21:04.466499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-07-15 16:21:04.466506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 00:29:28.659 [2024-07-15 16:21:04.466917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-07-15 16:21:04.466924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 00:29:28.659 [2024-07-15 16:21:04.467537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-07-15 16:21:04.467565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 
00:29:28.659 [2024-07-15 16:21:04.467986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-07-15 16:21:04.467995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 00:29:28.659 [2024-07-15 16:21:04.468378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-07-15 16:21:04.468406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 00:29:28.659 [2024-07-15 16:21:04.468885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-07-15 16:21:04.468893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 00:29:28.659 [2024-07-15 16:21:04.469381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-07-15 16:21:04.469409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 00:29:28.659 [2024-07-15 16:21:04.469800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-07-15 16:21:04.469809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 00:29:28.659 [2024-07-15 16:21:04.470341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-07-15 16:21:04.470368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 00:29:28.659 [2024-07-15 16:21:04.470762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-07-15 16:21:04.470770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 00:29:28.659 [2024-07-15 16:21:04.471149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.659 [2024-07-15 16:21:04.471157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.659 qpair failed and we were unable to recover it. 00:29:28.935 [2024-07-15 16:21:04.471444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.935 [2024-07-15 16:21:04.471453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.935 qpair failed and we were unable to recover it. 00:29:28.935 [2024-07-15 16:21:04.471855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.935 [2024-07-15 16:21:04.471863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.935 qpair failed and we were unable to recover it. 
00:29:28.935 [2024-07-15 16:21:04.472246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.936 [2024-07-15 16:21:04.472260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.936 qpair failed and we were unable to recover it. 00:29:28.936 [2024-07-15 16:21:04.472662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.936 [2024-07-15 16:21:04.472669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.936 qpair failed and we were unable to recover it. 00:29:28.936 [2024-07-15 16:21:04.473041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.936 [2024-07-15 16:21:04.473047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.936 qpair failed and we were unable to recover it. 00:29:28.936 [2024-07-15 16:21:04.473442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.936 [2024-07-15 16:21:04.473450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.936 qpair failed and we were unable to recover it. 00:29:28.936 [2024-07-15 16:21:04.473732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.936 [2024-07-15 16:21:04.473738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.936 qpair failed and we were unable to recover it. 00:29:28.936 [2024-07-15 16:21:04.474037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.936 [2024-07-15 16:21:04.474044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.936 qpair failed and we were unable to recover it. 00:29:28.936 [2024-07-15 16:21:04.474435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.936 [2024-07-15 16:21:04.474441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.936 qpair failed and we were unable to recover it. 00:29:28.936 [2024-07-15 16:21:04.474821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.936 [2024-07-15 16:21:04.474828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.936 qpair failed and we were unable to recover it. 00:29:28.936 [2024-07-15 16:21:04.475166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.936 [2024-07-15 16:21:04.475173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.936 qpair failed and we were unable to recover it. 00:29:28.936 [2024-07-15 16:21:04.475560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.936 [2024-07-15 16:21:04.475567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.936 qpair failed and we were unable to recover it. 
00:29:28.936 [2024-07-15 16:21:04.475987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.936 [2024-07-15 16:21:04.475994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.936 qpair failed and we were unable to recover it. 00:29:28.936 [2024-07-15 16:21:04.476251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.936 [2024-07-15 16:21:04.476258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.936 qpair failed and we were unable to recover it. 00:29:28.936 [2024-07-15 16:21:04.476534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.936 [2024-07-15 16:21:04.476542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.936 qpair failed and we were unable to recover it. 00:29:28.936 [2024-07-15 16:21:04.476956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.936 [2024-07-15 16:21:04.476963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.936 qpair failed and we were unable to recover it. 00:29:28.936 [2024-07-15 16:21:04.477376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.936 [2024-07-15 16:21:04.477383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.936 qpair failed and we were unable to recover it. 00:29:28.936 [2024-07-15 16:21:04.477796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.936 [2024-07-15 16:21:04.477803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.936 qpair failed and we were unable to recover it. 00:29:28.936 [2024-07-15 16:21:04.478198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.936 [2024-07-15 16:21:04.478205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.936 qpair failed and we were unable to recover it. 00:29:28.936 [2024-07-15 16:21:04.478586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.936 [2024-07-15 16:21:04.478593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.936 qpair failed and we were unable to recover it. 00:29:28.936 [2024-07-15 16:21:04.478886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.936 [2024-07-15 16:21:04.478893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.936 qpair failed and we were unable to recover it. 00:29:28.936 [2024-07-15 16:21:04.479296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.936 [2024-07-15 16:21:04.479304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.936 qpair failed and we were unable to recover it. 
00:29:28.936 [2024-07-15 16:21:04.479700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.936 [2024-07-15 16:21:04.479710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.936 qpair failed and we were unable to recover it. 00:29:28.936 [2024-07-15 16:21:04.480091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.936 [2024-07-15 16:21:04.480098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.936 qpair failed and we were unable to recover it. 00:29:28.936 [2024-07-15 16:21:04.480482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.936 [2024-07-15 16:21:04.480489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.936 qpair failed and we were unable to recover it. 00:29:28.936 [2024-07-15 16:21:04.480775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.936 [2024-07-15 16:21:04.480781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.936 qpair failed and we were unable to recover it. 00:29:28.936 [2024-07-15 16:21:04.481134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.936 [2024-07-15 16:21:04.481141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.936 qpair failed and we were unable to recover it. 00:29:28.936 [2024-07-15 16:21:04.481531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.936 [2024-07-15 16:21:04.481537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.936 qpair failed and we were unable to recover it. 00:29:28.936 [2024-07-15 16:21:04.481926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.936 [2024-07-15 16:21:04.481933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.936 qpair failed and we were unable to recover it. 00:29:28.936 [2024-07-15 16:21:04.482330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.936 [2024-07-15 16:21:04.482337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.936 qpair failed and we were unable to recover it. 00:29:28.936 [2024-07-15 16:21:04.482784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.936 [2024-07-15 16:21:04.482791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.936 qpair failed and we were unable to recover it. 00:29:28.936 [2024-07-15 16:21:04.483073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.936 [2024-07-15 16:21:04.483079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.936 qpair failed and we were unable to recover it. 
00:29:28.936 [2024-07-15 16:21:04.483471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.936 [2024-07-15 16:21:04.483478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.936 qpair failed and we were unable to recover it. 00:29:28.936 [2024-07-15 16:21:04.483848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.936 [2024-07-15 16:21:04.483855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.936 qpair failed and we were unable to recover it. 00:29:28.936 [2024-07-15 16:21:04.484154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.936 [2024-07-15 16:21:04.484161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.936 qpair failed and we were unable to recover it. 00:29:28.936 [2024-07-15 16:21:04.484472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.937 [2024-07-15 16:21:04.484479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.937 qpair failed and we were unable to recover it. 00:29:28.937 [2024-07-15 16:21:04.484850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.937 [2024-07-15 16:21:04.484857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.937 qpair failed and we were unable to recover it. 00:29:28.937 [2024-07-15 16:21:04.485061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.937 [2024-07-15 16:21:04.485071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.937 qpair failed and we were unable to recover it. 00:29:28.937 [2024-07-15 16:21:04.485486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.937 [2024-07-15 16:21:04.485494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.937 qpair failed and we were unable to recover it. 00:29:28.937 [2024-07-15 16:21:04.485762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.937 [2024-07-15 16:21:04.485770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.937 qpair failed and we were unable to recover it. 00:29:28.937 [2024-07-15 16:21:04.486021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.937 [2024-07-15 16:21:04.486028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.937 qpair failed and we were unable to recover it. 00:29:28.937 [2024-07-15 16:21:04.486429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.937 [2024-07-15 16:21:04.486437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.937 qpair failed and we were unable to recover it. 
00:29:28.937 [2024-07-15 16:21:04.486826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.937 [2024-07-15 16:21:04.486832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.937 qpair failed and we were unable to recover it. 00:29:28.937 [2024-07-15 16:21:04.487205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.937 [2024-07-15 16:21:04.487212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.937 qpair failed and we were unable to recover it. 00:29:28.937 [2024-07-15 16:21:04.487489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.937 [2024-07-15 16:21:04.487496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.937 qpair failed and we were unable to recover it. 00:29:28.937 [2024-07-15 16:21:04.487782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.937 [2024-07-15 16:21:04.487789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.937 qpair failed and we were unable to recover it. 00:29:28.937 [2024-07-15 16:21:04.488164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.937 [2024-07-15 16:21:04.488170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.937 qpair failed and we were unable to recover it. 00:29:28.937 [2024-07-15 16:21:04.488599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.937 [2024-07-15 16:21:04.488606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.937 qpair failed and we were unable to recover it. 00:29:28.937 [2024-07-15 16:21:04.488995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.937 [2024-07-15 16:21:04.489002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.937 qpair failed and we were unable to recover it. 00:29:28.937 [2024-07-15 16:21:04.489250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.937 [2024-07-15 16:21:04.489258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.937 qpair failed and we were unable to recover it. 00:29:28.937 [2024-07-15 16:21:04.489549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.937 [2024-07-15 16:21:04.489556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.937 qpair failed and we were unable to recover it. 00:29:28.937 [2024-07-15 16:21:04.489928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.937 [2024-07-15 16:21:04.489935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.937 qpair failed and we were unable to recover it. 
00:29:28.937 [2024-07-15 16:21:04.490320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.937 [2024-07-15 16:21:04.490327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.937 qpair failed and we were unable to recover it. 00:29:28.937 [2024-07-15 16:21:04.490512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.937 [2024-07-15 16:21:04.490520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.937 qpair failed and we were unable to recover it. 00:29:28.937 [2024-07-15 16:21:04.490970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.937 [2024-07-15 16:21:04.490976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.937 qpair failed and we were unable to recover it. 00:29:28.937 [2024-07-15 16:21:04.491390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.937 [2024-07-15 16:21:04.491397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.937 qpair failed and we were unable to recover it. 00:29:28.937 [2024-07-15 16:21:04.491785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.937 [2024-07-15 16:21:04.491791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.937 qpair failed and we were unable to recover it. 00:29:28.937 [2024-07-15 16:21:04.492173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.937 [2024-07-15 16:21:04.492181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.937 qpair failed and we were unable to recover it. 00:29:28.937 [2024-07-15 16:21:04.492610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.937 [2024-07-15 16:21:04.492616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.937 qpair failed and we were unable to recover it. 00:29:28.937 [2024-07-15 16:21:04.492992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.937 [2024-07-15 16:21:04.492999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.937 qpair failed and we were unable to recover it. 00:29:28.937 [2024-07-15 16:21:04.493377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.937 [2024-07-15 16:21:04.493383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.937 qpair failed and we were unable to recover it. 00:29:28.937 [2024-07-15 16:21:04.493558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.937 [2024-07-15 16:21:04.493565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.937 qpair failed and we were unable to recover it. 
00:29:28.937 [2024-07-15 16:21:04.493841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.937 [2024-07-15 16:21:04.493851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.937 qpair failed and we were unable to recover it. 00:29:28.937 [2024-07-15 16:21:04.494148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.937 [2024-07-15 16:21:04.494155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.937 qpair failed and we were unable to recover it. 00:29:28.937 [2024-07-15 16:21:04.494544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.937 [2024-07-15 16:21:04.494550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.937 qpair failed and we were unable to recover it. 00:29:28.937 [2024-07-15 16:21:04.494956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.937 [2024-07-15 16:21:04.494963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.937 qpair failed and we were unable to recover it. 00:29:28.937 [2024-07-15 16:21:04.495356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.937 [2024-07-15 16:21:04.495363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.938 qpair failed and we were unable to recover it. 00:29:28.938 [2024-07-15 16:21:04.495734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.938 [2024-07-15 16:21:04.495741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.938 qpair failed and we were unable to recover it. 00:29:28.938 [2024-07-15 16:21:04.496156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.938 [2024-07-15 16:21:04.496163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.938 qpair failed and we were unable to recover it. 00:29:28.938 [2024-07-15 16:21:04.496557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.938 [2024-07-15 16:21:04.496564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.938 qpair failed and we were unable to recover it. 00:29:28.938 [2024-07-15 16:21:04.496839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.938 [2024-07-15 16:21:04.496846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.938 qpair failed and we were unable to recover it. 00:29:28.938 [2024-07-15 16:21:04.497230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.938 [2024-07-15 16:21:04.497237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.938 qpair failed and we were unable to recover it. 
00:29:28.938 [2024-07-15 16:21:04.497508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.938 [2024-07-15 16:21:04.497515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.938 qpair failed and we were unable to recover it. 00:29:28.938 [2024-07-15 16:21:04.497910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.938 [2024-07-15 16:21:04.497917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.938 qpair failed and we were unable to recover it. 00:29:28.938 [2024-07-15 16:21:04.498197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.938 [2024-07-15 16:21:04.498204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.938 qpair failed and we were unable to recover it. 00:29:28.938 [2024-07-15 16:21:04.498502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.938 [2024-07-15 16:21:04.498510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.938 qpair failed and we were unable to recover it. 00:29:28.938 [2024-07-15 16:21:04.498920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.938 [2024-07-15 16:21:04.498926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.938 qpair failed and we were unable to recover it. 00:29:28.938 [2024-07-15 16:21:04.499305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.938 [2024-07-15 16:21:04.499312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.938 qpair failed and we were unable to recover it. 00:29:28.938 [2024-07-15 16:21:04.499617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.938 [2024-07-15 16:21:04.499623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.938 qpair failed and we were unable to recover it. 00:29:28.938 [2024-07-15 16:21:04.499984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.938 [2024-07-15 16:21:04.499991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.938 qpair failed and we were unable to recover it. 00:29:28.938 [2024-07-15 16:21:04.500449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.938 [2024-07-15 16:21:04.500456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.938 qpair failed and we were unable to recover it. 00:29:28.938 [2024-07-15 16:21:04.500826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.938 [2024-07-15 16:21:04.500833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.938 qpair failed and we were unable to recover it. 
00:29:28.938 [2024-07-15 16:21:04.501226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.938 [2024-07-15 16:21:04.501234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.938 qpair failed and we were unable to recover it. 00:29:28.938 [2024-07-15 16:21:04.501511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.938 [2024-07-15 16:21:04.501517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.938 qpair failed and we were unable to recover it. 00:29:28.938 [2024-07-15 16:21:04.501889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.938 [2024-07-15 16:21:04.501895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.938 qpair failed and we were unable to recover it. 00:29:28.938 [2024-07-15 16:21:04.502196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.938 [2024-07-15 16:21:04.502209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.938 qpair failed and we were unable to recover it. 00:29:28.938 [2024-07-15 16:21:04.502439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.938 [2024-07-15 16:21:04.502446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.938 qpair failed and we were unable to recover it. 00:29:28.938 [2024-07-15 16:21:04.502826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.938 [2024-07-15 16:21:04.502832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.938 qpair failed and we were unable to recover it. 00:29:28.938 [2024-07-15 16:21:04.503190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.938 [2024-07-15 16:21:04.503197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.938 qpair failed and we were unable to recover it. 00:29:28.938 [2024-07-15 16:21:04.503423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.938 [2024-07-15 16:21:04.503430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.938 qpair failed and we were unable to recover it. 00:29:28.938 [2024-07-15 16:21:04.503816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.938 [2024-07-15 16:21:04.503823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.938 qpair failed and we were unable to recover it. 00:29:28.938 [2024-07-15 16:21:04.504185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.938 [2024-07-15 16:21:04.504191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.938 qpair failed and we were unable to recover it. 
00:29:28.938 [2024-07-15 16:21:04.504461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.938 [2024-07-15 16:21:04.504468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.938 qpair failed and we were unable to recover it. 00:29:28.938 [2024-07-15 16:21:04.504889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.938 [2024-07-15 16:21:04.504896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.938 qpair failed and we were unable to recover it. 00:29:28.938 [2024-07-15 16:21:04.505267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.938 [2024-07-15 16:21:04.505274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.938 qpair failed and we were unable to recover it. 00:29:28.938 [2024-07-15 16:21:04.505628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.938 [2024-07-15 16:21:04.505635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.938 qpair failed and we were unable to recover it. 00:29:28.938 [2024-07-15 16:21:04.506047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.938 [2024-07-15 16:21:04.506054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.938 qpair failed and we were unable to recover it. 00:29:28.938 [2024-07-15 16:21:04.506511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.938 [2024-07-15 16:21:04.506518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.938 qpair failed and we were unable to recover it. 00:29:28.938 [2024-07-15 16:21:04.506912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.939 [2024-07-15 16:21:04.506919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.939 qpair failed and we were unable to recover it. 00:29:28.939 [2024-07-15 16:21:04.507240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.939 [2024-07-15 16:21:04.507247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.939 qpair failed and we were unable to recover it. 00:29:28.939 [2024-07-15 16:21:04.507666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.939 [2024-07-15 16:21:04.507673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.939 qpair failed and we were unable to recover it. 00:29:28.939 [2024-07-15 16:21:04.507920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.939 [2024-07-15 16:21:04.507927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.939 qpair failed and we were unable to recover it. 
00:29:28.939 [2024-07-15 16:21:04.508191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.939 [2024-07-15 16:21:04.508198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.939 qpair failed and we were unable to recover it. 00:29:28.939 [2024-07-15 16:21:04.508488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.939 [2024-07-15 16:21:04.508494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.939 qpair failed and we were unable to recover it. 00:29:28.939 [2024-07-15 16:21:04.508871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.939 [2024-07-15 16:21:04.508878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.939 qpair failed and we were unable to recover it. 00:29:28.939 [2024-07-15 16:21:04.509270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.939 [2024-07-15 16:21:04.509277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.939 qpair failed and we were unable to recover it. 00:29:28.939 [2024-07-15 16:21:04.509671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.939 [2024-07-15 16:21:04.509677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.939 qpair failed and we were unable to recover it. 00:29:28.939 [2024-07-15 16:21:04.510044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.939 [2024-07-15 16:21:04.510051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.939 qpair failed and we were unable to recover it. 00:29:28.939 [2024-07-15 16:21:04.510325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.939 [2024-07-15 16:21:04.510332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.939 qpair failed and we were unable to recover it. 00:29:28.939 [2024-07-15 16:21:04.510636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.939 [2024-07-15 16:21:04.510644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.939 qpair failed and we were unable to recover it. 00:29:28.939 [2024-07-15 16:21:04.511010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.939 [2024-07-15 16:21:04.511017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.939 qpair failed and we were unable to recover it. 00:29:28.939 [2024-07-15 16:21:04.511433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.939 [2024-07-15 16:21:04.511440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.939 qpair failed and we were unable to recover it. 
00:29:28.939 [2024-07-15 16:21:04.511688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.939 [2024-07-15 16:21:04.511695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420
00:29:28.939 qpair failed and we were unable to recover it.
00:29:28.939 [2024-07-15 16:21:04.512085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.939 [2024-07-15 16:21:04.512093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420
00:29:28.939 qpair failed and we were unable to recover it.
00:29:28.939 [2024-07-15 16:21:04.512482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.939 [2024-07-15 16:21:04.512489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420
00:29:28.939 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix.c:1038:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every reconnect attempt logged from 2024-07-15 16:21:04.512871 through 16:21:04.590894 ...]
00:29:28.946 [2024-07-15 16:21:04.591293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.946 [2024-07-15 16:21:04.591321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420
00:29:28.946 qpair failed and we were unable to recover it.
00:29:28.946 [2024-07-15 16:21:04.591711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.946 [2024-07-15 16:21:04.591720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420
00:29:28.946 qpair failed and we were unable to recover it.
00:29:28.946 [2024-07-15 16:21:04.592075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.946 [2024-07-15 16:21:04.592081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.946 qpair failed and we were unable to recover it. 00:29:28.946 [2024-07-15 16:21:04.592464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.946 [2024-07-15 16:21:04.592472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.946 qpair failed and we were unable to recover it. 00:29:28.946 [2024-07-15 16:21:04.592884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.946 [2024-07-15 16:21:04.592891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.946 qpair failed and we were unable to recover it. 00:29:28.946 [2024-07-15 16:21:04.593324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.946 [2024-07-15 16:21:04.593352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.946 qpair failed and we were unable to recover it. 00:29:28.946 [2024-07-15 16:21:04.593746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.946 [2024-07-15 16:21:04.593755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.946 qpair failed and we were unable to recover it. 00:29:28.946 [2024-07-15 16:21:04.594171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.946 [2024-07-15 16:21:04.594179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.946 qpair failed and we were unable to recover it. 00:29:28.946 [2024-07-15 16:21:04.594571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.946 [2024-07-15 16:21:04.594578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.946 qpair failed and we were unable to recover it. 00:29:28.946 [2024-07-15 16:21:04.594976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.946 [2024-07-15 16:21:04.594986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.946 qpair failed and we were unable to recover it. 00:29:28.946 [2024-07-15 16:21:04.595198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.946 [2024-07-15 16:21:04.595207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.946 qpair failed and we were unable to recover it. 00:29:28.946 [2024-07-15 16:21:04.595583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.946 [2024-07-15 16:21:04.595590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.946 qpair failed and we were unable to recover it. 
00:29:28.946 [2024-07-15 16:21:04.596009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.946 [2024-07-15 16:21:04.596017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.946 qpair failed and we were unable to recover it. 00:29:28.946 [2024-07-15 16:21:04.596427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.946 [2024-07-15 16:21:04.596435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.946 qpair failed and we were unable to recover it. 00:29:28.946 [2024-07-15 16:21:04.596827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.946 [2024-07-15 16:21:04.596835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.946 qpair failed and we were unable to recover it. 00:29:28.946 [2024-07-15 16:21:04.597235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.946 [2024-07-15 16:21:04.597243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.946 qpair failed and we were unable to recover it. 00:29:28.946 [2024-07-15 16:21:04.597663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.946 [2024-07-15 16:21:04.597671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.946 qpair failed and we were unable to recover it. 00:29:28.946 [2024-07-15 16:21:04.598084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.946 [2024-07-15 16:21:04.598092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 16:21:04.598498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 16:21:04.598506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 16:21:04.598902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 16:21:04.598910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 16:21:04.599321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 16:21:04.599329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 16:21:04.599713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 16:21:04.599721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 
00:29:28.947 [2024-07-15 16:21:04.600015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 16:21:04.600022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 16:21:04.600428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 16:21:04.600436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 16:21:04.600850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 16:21:04.600857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 16:21:04.601120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 16:21:04.601132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 16:21:04.601526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 16:21:04.601533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 16:21:04.601954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 16:21:04.601961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 16:21:04.602449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 16:21:04.602477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 16:21:04.602883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 16:21:04.602891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 16:21:04.603389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 16:21:04.603417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 16:21:04.603837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 16:21:04.603846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 
00:29:28.947 [2024-07-15 16:21:04.604107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 16:21:04.604115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 16:21:04.604458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 16:21:04.604466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 16:21:04.604890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 16:21:04.604898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 16:21:04.605280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 16:21:04.605308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 16:21:04.605727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 16:21:04.605736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 16:21:04.606132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 16:21:04.606140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 16:21:04.606529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 16:21:04.606536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 16:21:04.606939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 16:21:04.606946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 16:21:04.607448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 16:21:04.607476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 16:21:04.607878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 16:21:04.607888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 
00:29:28.948 [2024-07-15 16:21:04.608287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 16:21:04.608315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 16:21:04.608702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 16:21:04.608711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 16:21:04.609089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 16:21:04.609097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 16:21:04.609547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 16:21:04.609555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 16:21:04.609844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 16:21:04.609852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 16:21:04.610354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 16:21:04.610382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 16:21:04.610800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 16:21:04.610809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 16:21:04.611139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 16:21:04.611150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 16:21:04.611561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 16:21:04.611569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 16:21:04.611996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 16:21:04.612003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 
00:29:28.948 [2024-07-15 16:21:04.612389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 16:21:04.612397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 16:21:04.612786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 16:21:04.612794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 16:21:04.613278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 16:21:04.613306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 16:21:04.613720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 16:21:04.613729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 16:21:04.614148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 16:21:04.614156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 16:21:04.614546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 16:21:04.614554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 16:21:04.614955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 16:21:04.614963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 16:21:04.615348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 16:21:04.615355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 16:21:04.615768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 16:21:04.615775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 16:21:04.616217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 16:21:04.616224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 
00:29:28.948 [2024-07-15 16:21:04.616514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 16:21:04.616529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 16:21:04.616941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 16:21:04.616948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 16:21:04.617319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 16:21:04.617326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 16:21:04.617682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 16:21:04.617688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 16:21:04.618078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 16:21:04.618084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 16:21:04.618500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 16:21:04.618507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 16:21:04.618879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 16:21:04.618886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 16:21:04.619255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 16:21:04.619262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 16:21:04.619436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 16:21:04.619445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 16:21:04.619833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 16:21:04.619840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 
00:29:28.949 [2024-07-15 16:21:04.620257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 16:21:04.620264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 16:21:04.620553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 16:21:04.620560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 16:21:04.620950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 16:21:04.620957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 16:21:04.621365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 16:21:04.621372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 16:21:04.621781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 16:21:04.621787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 16:21:04.622186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 16:21:04.622193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 16:21:04.622587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 16:21:04.622594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 16:21:04.622785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 16:21:04.622793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 16:21:04.623206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 16:21:04.623218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 16:21:04.623521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 16:21:04.623528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 
00:29:28.949 [2024-07-15 16:21:04.623936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 16:21:04.623942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 16:21:04.624314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 16:21:04.624321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 16:21:04.624706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 16:21:04.624712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 16:21:04.625115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 16:21:04.625124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 16:21:04.625516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 16:21:04.625523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 16:21:04.625935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 16:21:04.625942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 16:21:04.626438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 16:21:04.626465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 16:21:04.626857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 16:21:04.626868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 16:21:04.627283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 16:21:04.627290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 16:21:04.627693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 16:21:04.627700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 
00:29:28.949 [2024-07-15 16:21:04.628115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 16:21:04.628126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 16:21:04.628403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 16:21:04.628409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 16:21:04.628780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 16:21:04.628786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 16:21:04.629198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 16:21:04.629205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 16:21:04.629623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 16:21:04.629630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 16:21:04.630041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 16:21:04.630049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 16:21:04.630451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 16:21:04.630458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 16:21:04.630869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 16:21:04.630876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 16:21:04.631324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 16:21:04.631331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 16:21:04.631710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 16:21:04.631716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 
00:29:28.950 [2024-07-15 16:21:04.632101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 16:21:04.632108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 16:21:04.632530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 16:21:04.632537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 16:21:04.632906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 16:21:04.632913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 16:21:04.633408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 16:21:04.633435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 16:21:04.633825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 16:21:04.633833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 16:21:04.634327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 16:21:04.634354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 16:21:04.634766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 16:21:04.634774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 16:21:04.635147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 16:21:04.635155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 16:21:04.635553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 16:21:04.635559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 16:21:04.635935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 16:21:04.635942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 
00:29:28.950 [2024-07-15 16:21:04.636314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 16:21:04.636321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 16:21:04.636727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 16:21:04.636734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 16:21:04.637128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 16:21:04.637134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 16:21:04.637412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 16:21:04.637418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 16:21:04.637837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 16:21:04.637844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 16:21:04.638213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 16:21:04.638220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 16:21:04.638609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 16:21:04.638616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 16:21:04.638915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 16:21:04.638921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 16:21:04.639303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 16:21:04.639310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 16:21:04.639709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 16:21:04.639715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 
00:29:28.950 [2024-07-15 16:21:04.640010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 16:21:04.640017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 16:21:04.640448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 16:21:04.640455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 16:21:04.640826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 16:21:04.640833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 16:21:04.641126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 16:21:04.641133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 16:21:04.641466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 16:21:04.641472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 16:21:04.641881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 16:21:04.641888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 16:21:04.642291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 16:21:04.642318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 16:21:04.642781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 16:21:04.642793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 16:21:04.643186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 16:21:04.643194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 16:21:04.643470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 16:21:04.643478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 
00:29:28.951 [2024-07-15 16:21:04.643762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 16:21:04.643768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 16:21:04.644168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 16:21:04.644176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 16:21:04.644560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 16:21:04.644566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 16:21:04.644983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 16:21:04.644990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 16:21:04.645372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 16:21:04.645378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 16:21:04.645781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 16:21:04.645788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 16:21:04.646193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 16:21:04.646201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 16:21:04.646593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 16:21:04.646599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 16:21:04.646892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 16:21:04.646905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 16:21:04.647292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 16:21:04.647299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 
00:29:28.951 [2024-07-15 16:21:04.647681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 16:21:04.647688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 16:21:04.648093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 16:21:04.648099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 16:21:04.648473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 16:21:04.648479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 16:21:04.648861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 16:21:04.648867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 16:21:04.649287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 16:21:04.649294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 16:21:04.649680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 16:21:04.649686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 16:21:04.649886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 16:21:04.649896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 16:21:04.650216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 16:21:04.650224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 16:21:04.650629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 16:21:04.650635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 16:21:04.651047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 16:21:04.651054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 
00:29:28.951 [2024-07-15 16:21:04.651443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 16:21:04.651450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 16:21:04.651842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 16:21:04.651851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 16:21:04.652270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 16:21:04.652278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 16:21:04.652696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 16:21:04.652703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 16:21:04.653113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 16:21:04.653121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 16:21:04.653529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 16:21:04.653537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 16:21:04.653921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 16:21:04.653928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 16:21:04.654431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 16:21:04.654461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 16:21:04.654879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 16:21:04.654889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.952 qpair failed and we were unable to recover it. 00:29:28.952 [2024-07-15 16:21:04.655381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.952 [2024-07-15 16:21:04.655410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.952 qpair failed and we were unable to recover it. 
00:29:28.952 [2024-07-15 16:21:04.655811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.952 [2024-07-15 16:21:04.655821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.952 qpair failed and we were unable to recover it. 00:29:28.952 [2024-07-15 16:21:04.656236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.952 [2024-07-15 16:21:04.656245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.952 qpair failed and we were unable to recover it. 00:29:28.952 [2024-07-15 16:21:04.656661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.952 [2024-07-15 16:21:04.656669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.952 qpair failed and we were unable to recover it. 00:29:28.952 [2024-07-15 16:21:04.657070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.952 [2024-07-15 16:21:04.657078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.952 qpair failed and we were unable to recover it. 00:29:28.952 [2024-07-15 16:21:04.657489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.952 [2024-07-15 16:21:04.657498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.952 qpair failed and we were unable to recover it. 00:29:28.952 [2024-07-15 16:21:04.657951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.952 [2024-07-15 16:21:04.657959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.952 qpair failed and we were unable to recover it. 00:29:28.952 [2024-07-15 16:21:04.658442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.952 [2024-07-15 16:21:04.658470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.952 qpair failed and we were unable to recover it. 00:29:28.952 [2024-07-15 16:21:04.658875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.952 [2024-07-15 16:21:04.658888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.952 qpair failed and we were unable to recover it. 00:29:28.952 [2024-07-15 16:21:04.659392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.952 [2024-07-15 16:21:04.659422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.952 qpair failed and we were unable to recover it. 00:29:28.952 [2024-07-15 16:21:04.659840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.952 [2024-07-15 16:21:04.659850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.952 qpair failed and we were unable to recover it. 
00:29:28.952 [2024-07-15 16:21:04.660354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.952 [2024-07-15 16:21:04.660383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.952 qpair failed and we were unable to recover it. 00:29:28.952 [2024-07-15 16:21:04.660716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.952 [2024-07-15 16:21:04.660726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.952 qpair failed and we were unable to recover it. 00:29:28.952 [2024-07-15 16:21:04.660984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.952 [2024-07-15 16:21:04.660993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.952 qpair failed and we were unable to recover it. 00:29:28.952 [2024-07-15 16:21:04.661455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.952 [2024-07-15 16:21:04.661463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.952 qpair failed and we were unable to recover it. 00:29:28.952 [2024-07-15 16:21:04.661850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.952 [2024-07-15 16:21:04.661858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.952 qpair failed and we were unable to recover it. 00:29:28.952 [2024-07-15 16:21:04.662369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.952 [2024-07-15 16:21:04.662398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.952 qpair failed and we were unable to recover it. 00:29:28.952 [2024-07-15 16:21:04.662804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.952 [2024-07-15 16:21:04.662813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.952 qpair failed and we were unable to recover it. 00:29:28.952 [2024-07-15 16:21:04.663321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.952 [2024-07-15 16:21:04.663350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.952 qpair failed and we were unable to recover it. 00:29:28.952 [2024-07-15 16:21:04.663804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.952 [2024-07-15 16:21:04.663815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.952 qpair failed and we were unable to recover it. 00:29:28.952 [2024-07-15 16:21:04.664214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.952 [2024-07-15 16:21:04.664223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.952 qpair failed and we were unable to recover it. 
00:29:28.952 [2024-07-15 16:21:04.664619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.952 [2024-07-15 16:21:04.664627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.952 qpair failed and we were unable to recover it. 00:29:28.952 [2024-07-15 16:21:04.665054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.952 [2024-07-15 16:21:04.665062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.952 qpair failed and we were unable to recover it. 00:29:28.952 [2024-07-15 16:21:04.665448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.952 [2024-07-15 16:21:04.665456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.952 qpair failed and we were unable to recover it. 00:29:28.952 [2024-07-15 16:21:04.665854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.952 [2024-07-15 16:21:04.665862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.952 qpair failed and we were unable to recover it. 00:29:28.952 [2024-07-15 16:21:04.666257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.953 [2024-07-15 16:21:04.666265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.953 qpair failed and we were unable to recover it. 00:29:28.953 [2024-07-15 16:21:04.666604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.953 [2024-07-15 16:21:04.666613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.953 qpair failed and we were unable to recover it. 00:29:28.953 [2024-07-15 16:21:04.666898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.953 [2024-07-15 16:21:04.666907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.953 qpair failed and we were unable to recover it. 00:29:28.953 [2024-07-15 16:21:04.667266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.953 [2024-07-15 16:21:04.667276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.953 qpair failed and we were unable to recover it. 00:29:28.953 [2024-07-15 16:21:04.667712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.953 [2024-07-15 16:21:04.667720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.953 qpair failed and we were unable to recover it. 00:29:28.953 [2024-07-15 16:21:04.668009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.953 [2024-07-15 16:21:04.668018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.953 qpair failed and we were unable to recover it. 
00:29:28.953 [2024-07-15 16:21:04.668428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.953 [2024-07-15 16:21:04.668436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.953 qpair failed and we were unable to recover it. 00:29:28.953 [2024-07-15 16:21:04.668827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.953 [2024-07-15 16:21:04.668835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.953 qpair failed and we were unable to recover it. 00:29:28.953 [2024-07-15 16:21:04.669225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.953 [2024-07-15 16:21:04.669233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.953 qpair failed and we were unable to recover it. 00:29:28.953 [2024-07-15 16:21:04.669632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.953 [2024-07-15 16:21:04.669640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.953 qpair failed and we were unable to recover it. 00:29:28.953 [2024-07-15 16:21:04.670052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.953 [2024-07-15 16:21:04.670061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.953 qpair failed and we were unable to recover it. 00:29:28.953 [2024-07-15 16:21:04.670451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.953 [2024-07-15 16:21:04.670460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.953 qpair failed and we were unable to recover it. 00:29:28.953 [2024-07-15 16:21:04.670842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.953 [2024-07-15 16:21:04.670850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.953 qpair failed and we were unable to recover it. 00:29:28.953 [2024-07-15 16:21:04.671148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.953 [2024-07-15 16:21:04.671156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.953 qpair failed and we were unable to recover it. 00:29:28.953 [2024-07-15 16:21:04.671539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.953 [2024-07-15 16:21:04.671547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.953 qpair failed and we were unable to recover it. 00:29:28.953 [2024-07-15 16:21:04.671802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.953 [2024-07-15 16:21:04.671811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.953 qpair failed and we were unable to recover it. 
00:29:28.953 [2024-07-15 16:21:04.672203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.953 [2024-07-15 16:21:04.672212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.953 qpair failed and we were unable to recover it. 00:29:28.953 [2024-07-15 16:21:04.672625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.953 [2024-07-15 16:21:04.672633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.953 qpair failed and we were unable to recover it. 00:29:28.953 [2024-07-15 16:21:04.673060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.953 [2024-07-15 16:21:04.673068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.953 qpair failed and we were unable to recover it. 00:29:28.953 [2024-07-15 16:21:04.673460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.953 [2024-07-15 16:21:04.673468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.953 qpair failed and we were unable to recover it. 00:29:28.953 [2024-07-15 16:21:04.673859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.953 [2024-07-15 16:21:04.673867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.953 qpair failed and we were unable to recover it. 00:29:28.953 [2024-07-15 16:21:04.674274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.953 [2024-07-15 16:21:04.674282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.953 qpair failed and we were unable to recover it. 00:29:28.953 [2024-07-15 16:21:04.674692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.953 [2024-07-15 16:21:04.674701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.953 qpair failed and we were unable to recover it. 00:29:28.953 [2024-07-15 16:21:04.675094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.953 [2024-07-15 16:21:04.675104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.953 qpair failed and we were unable to recover it. 00:29:28.953 [2024-07-15 16:21:04.675515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.953 [2024-07-15 16:21:04.675524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.953 qpair failed and we were unable to recover it. 00:29:28.953 [2024-07-15 16:21:04.675932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.953 [2024-07-15 16:21:04.675941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.953 qpair failed and we were unable to recover it. 
00:29:28.953 [2024-07-15 16:21:04.676350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.953 [2024-07-15 16:21:04.676358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.953 qpair failed and we were unable to recover it. 00:29:28.953 [2024-07-15 16:21:04.676740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.953 [2024-07-15 16:21:04.676748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.953 qpair failed and we were unable to recover it. 00:29:28.953 [2024-07-15 16:21:04.677139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.953 [2024-07-15 16:21:04.677147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.953 qpair failed and we were unable to recover it. 00:29:28.953 [2024-07-15 16:21:04.677566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.953 [2024-07-15 16:21:04.677573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.953 qpair failed and we were unable to recover it. 00:29:28.954 [2024-07-15 16:21:04.677870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.954 [2024-07-15 16:21:04.677879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.954 qpair failed and we were unable to recover it. 00:29:28.954 [2024-07-15 16:21:04.678272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.954 [2024-07-15 16:21:04.678280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.954 qpair failed and we were unable to recover it. 00:29:28.954 [2024-07-15 16:21:04.678671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.954 [2024-07-15 16:21:04.678678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.954 qpair failed and we were unable to recover it. 00:29:28.954 [2024-07-15 16:21:04.679088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.954 [2024-07-15 16:21:04.679095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.954 qpair failed and we were unable to recover it. 00:29:28.954 [2024-07-15 16:21:04.679311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.954 [2024-07-15 16:21:04.679321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.954 qpair failed and we were unable to recover it. 00:29:28.954 [2024-07-15 16:21:04.679707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.954 [2024-07-15 16:21:04.679715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.954 qpair failed and we were unable to recover it. 
00:29:28.954 [2024-07-15 16:21:04.680061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.954 [2024-07-15 16:21:04.680069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.954 qpair failed and we were unable to recover it. 00:29:28.954 [2024-07-15 16:21:04.680483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.954 [2024-07-15 16:21:04.680491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.954 qpair failed and we were unable to recover it. 00:29:28.954 [2024-07-15 16:21:04.680867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.954 [2024-07-15 16:21:04.680876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.954 qpair failed and we were unable to recover it. 00:29:28.954 [2024-07-15 16:21:04.681307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.954 [2024-07-15 16:21:04.681315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.954 qpair failed and we were unable to recover it. 00:29:28.954 [2024-07-15 16:21:04.681712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.954 [2024-07-15 16:21:04.681720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.954 qpair failed and we were unable to recover it. 00:29:28.954 [2024-07-15 16:21:04.682133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.954 [2024-07-15 16:21:04.682141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.954 qpair failed and we were unable to recover it. 00:29:28.954 [2024-07-15 16:21:04.682521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.954 [2024-07-15 16:21:04.682529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.954 qpair failed and we were unable to recover it. 00:29:28.954 [2024-07-15 16:21:04.682919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.954 [2024-07-15 16:21:04.682928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.954 qpair failed and we were unable to recover it. 00:29:28.954 [2024-07-15 16:21:04.683319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.954 [2024-07-15 16:21:04.683328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.954 qpair failed and we were unable to recover it. 00:29:28.954 [2024-07-15 16:21:04.683741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.954 [2024-07-15 16:21:04.683749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.954 qpair failed and we were unable to recover it. 
00:29:28.954 [2024-07-15 16:21:04.684191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.954 [2024-07-15 16:21:04.684199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.954 qpair failed and we were unable to recover it. 00:29:28.954 [2024-07-15 16:21:04.684452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.954 [2024-07-15 16:21:04.684460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.954 qpair failed and we were unable to recover it. 00:29:28.954 [2024-07-15 16:21:04.684873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.954 [2024-07-15 16:21:04.684881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.954 qpair failed and we were unable to recover it. 00:29:28.954 [2024-07-15 16:21:04.685294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.954 [2024-07-15 16:21:04.685302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.954 qpair failed and we were unable to recover it. 00:29:28.954 [2024-07-15 16:21:04.685651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.954 [2024-07-15 16:21:04.685660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.954 qpair failed and we were unable to recover it. 00:29:28.954 [2024-07-15 16:21:04.686034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.954 [2024-07-15 16:21:04.686042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.954 qpair failed and we were unable to recover it. 00:29:28.954 [2024-07-15 16:21:04.686426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.954 [2024-07-15 16:21:04.686435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.954 qpair failed and we were unable to recover it. 00:29:28.954 [2024-07-15 16:21:04.686802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.954 [2024-07-15 16:21:04.686810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.954 qpair failed and we were unable to recover it. 00:29:28.954 [2024-07-15 16:21:04.687142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.954 [2024-07-15 16:21:04.687150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.954 qpair failed and we were unable to recover it. 00:29:28.954 [2024-07-15 16:21:04.687550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.954 [2024-07-15 16:21:04.687557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.954 qpair failed and we were unable to recover it. 
00:29:28.954 [2024-07-15 16:21:04.687950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.954 [2024-07-15 16:21:04.687958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.954 qpair failed and we were unable to recover it. 00:29:28.954 [2024-07-15 16:21:04.688381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.954 [2024-07-15 16:21:04.688389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.954 qpair failed and we were unable to recover it. 00:29:28.954 [2024-07-15 16:21:04.688798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.954 [2024-07-15 16:21:04.688806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.954 qpair failed and we were unable to recover it. 00:29:28.954 [2024-07-15 16:21:04.689182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.954 [2024-07-15 16:21:04.689190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.954 qpair failed and we were unable to recover it. 00:29:28.954 [2024-07-15 16:21:04.689577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.954 [2024-07-15 16:21:04.689585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.954 qpair failed and we were unable to recover it. 00:29:28.954 [2024-07-15 16:21:04.689997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.954 [2024-07-15 16:21:04.690004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.954 qpair failed and we were unable to recover it. 00:29:28.954 [2024-07-15 16:21:04.690418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.954 [2024-07-15 16:21:04.690426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.954 qpair failed and we were unable to recover it. 00:29:28.955 [2024-07-15 16:21:04.690808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-07-15 16:21:04.690819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-07-15 16:21:04.691205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-07-15 16:21:04.691213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-07-15 16:21:04.691622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-07-15 16:21:04.691631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 
00:29:28.955 [2024-07-15 16:21:04.692039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-07-15 16:21:04.692048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-07-15 16:21:04.692434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-07-15 16:21:04.692443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-07-15 16:21:04.692826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-07-15 16:21:04.692834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-07-15 16:21:04.693244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-07-15 16:21:04.693252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-07-15 16:21:04.693627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-07-15 16:21:04.693634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-07-15 16:21:04.694033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-07-15 16:21:04.694042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-07-15 16:21:04.694450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-07-15 16:21:04.694458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-07-15 16:21:04.694870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-07-15 16:21:04.694877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-07-15 16:21:04.695293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-07-15 16:21:04.695301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-07-15 16:21:04.695691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-07-15 16:21:04.695700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 
00:29:28.955 [2024-07-15 16:21:04.696086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-07-15 16:21:04.696095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-07-15 16:21:04.696508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-07-15 16:21:04.696517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-07-15 16:21:04.696928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-07-15 16:21:04.696936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-07-15 16:21:04.697140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-07-15 16:21:04.697150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-07-15 16:21:04.697562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-07-15 16:21:04.697571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-07-15 16:21:04.697992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-07-15 16:21:04.698000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-07-15 16:21:04.698476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-07-15 16:21:04.698504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-07-15 16:21:04.698908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-07-15 16:21:04.698918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-07-15 16:21:04.699399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-07-15 16:21:04.699427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-07-15 16:21:04.699806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-07-15 16:21:04.699815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 
00:29:28.955 [2024-07-15 16:21:04.700027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-07-15 16:21:04.700039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-07-15 16:21:04.700426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-07-15 16:21:04.700435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-07-15 16:21:04.700835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-07-15 16:21:04.700843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-07-15 16:21:04.701254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-07-15 16:21:04.701262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-07-15 16:21:04.701668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-07-15 16:21:04.701676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-07-15 16:21:04.702068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-07-15 16:21:04.702077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-07-15 16:21:04.702467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-07-15 16:21:04.702476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-07-15 16:21:04.702891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-07-15 16:21:04.702899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-07-15 16:21:04.703408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.955 [2024-07-15 16:21:04.703437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.955 qpair failed and we were unable to recover it. 00:29:28.955 [2024-07-15 16:21:04.703836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 16:21:04.703846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 
00:29:28.956 [2024-07-15 16:21:04.704052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 16:21:04.704062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-07-15 16:21:04.704451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 16:21:04.704459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-07-15 16:21:04.704868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 16:21:04.704876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-07-15 16:21:04.705265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 16:21:04.705273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-07-15 16:21:04.705565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 16:21:04.705574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-07-15 16:21:04.705951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 16:21:04.705960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-07-15 16:21:04.706368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 16:21:04.706377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-07-15 16:21:04.706767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 16:21:04.706779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-07-15 16:21:04.707161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 16:21:04.707169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-07-15 16:21:04.707607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 16:21:04.707615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 
00:29:28.956 [2024-07-15 16:21:04.707990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 16:21:04.707999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-07-15 16:21:04.708396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 16:21:04.708403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-07-15 16:21:04.708793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 16:21:04.708801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-07-15 16:21:04.709304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 16:21:04.709333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-07-15 16:21:04.709752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 16:21:04.709762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-07-15 16:21:04.710163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 16:21:04.710171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-07-15 16:21:04.710566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 16:21:04.710574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-07-15 16:21:04.710988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 16:21:04.710995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-07-15 16:21:04.711411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 16:21:04.711420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-07-15 16:21:04.711804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 16:21:04.711812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 
00:29:28.956 [2024-07-15 16:21:04.712343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 16:21:04.712372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-07-15 16:21:04.712791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 16:21:04.712800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-07-15 16:21:04.713230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 16:21:04.713239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-07-15 16:21:04.713680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 16:21:04.713687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-07-15 16:21:04.714084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 16:21:04.714092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-07-15 16:21:04.714500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 16:21:04.714508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-07-15 16:21:04.714897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 16:21:04.714905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-07-15 16:21:04.715408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 16:21:04.715437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-07-15 16:21:04.715853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 16:21:04.715863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-07-15 16:21:04.716373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 16:21:04.716403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 
[... the same three-line error sequence (posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every subsequent connection attempt, with timestamps running from 16:21:04.716773 through 16:21:04.794768 ...]
00:29:29.237 [2024-07-15 16:21:04.794760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-07-15 16:21:04.794768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-07-15 16:21:04.795177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.237 [2024-07-15 16:21:04.795185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.237 qpair failed and we were unable to recover it. 00:29:29.237 [2024-07-15 16:21:04.795592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.237 [2024-07-15 16:21:04.795600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.237 qpair failed and we were unable to recover it. 00:29:29.237 [2024-07-15 16:21:04.795800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.237 [2024-07-15 16:21:04.795809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 00:29:29.238 [2024-07-15 16:21:04.796226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.238 [2024-07-15 16:21:04.796234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 00:29:29.238 [2024-07-15 16:21:04.796620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.238 [2024-07-15 16:21:04.796630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 00:29:29.238 [2024-07-15 16:21:04.797047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.238 [2024-07-15 16:21:04.797055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 00:29:29.238 [2024-07-15 16:21:04.797457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.238 [2024-07-15 16:21:04.797465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 00:29:29.238 [2024-07-15 16:21:04.797859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.238 [2024-07-15 16:21:04.797868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 00:29:29.238 [2024-07-15 16:21:04.798287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.238 [2024-07-15 16:21:04.798296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 00:29:29.238 [2024-07-15 16:21:04.798500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.238 [2024-07-15 16:21:04.798509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 
00:29:29.238 [2024-07-15 16:21:04.798883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.238 [2024-07-15 16:21:04.798892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 00:29:29.238 [2024-07-15 16:21:04.799300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.238 [2024-07-15 16:21:04.799308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 00:29:29.238 [2024-07-15 16:21:04.799685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.238 [2024-07-15 16:21:04.799693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 00:29:29.238 [2024-07-15 16:21:04.800106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.238 [2024-07-15 16:21:04.800114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 00:29:29.238 [2024-07-15 16:21:04.800494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.238 [2024-07-15 16:21:04.800503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 00:29:29.238 [2024-07-15 16:21:04.800798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.238 [2024-07-15 16:21:04.800807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 00:29:29.238 [2024-07-15 16:21:04.801203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.238 [2024-07-15 16:21:04.801211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 00:29:29.238 [2024-07-15 16:21:04.801596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.238 [2024-07-15 16:21:04.801604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 00:29:29.238 [2024-07-15 16:21:04.802003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.238 [2024-07-15 16:21:04.802011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 00:29:29.238 [2024-07-15 16:21:04.802429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.238 [2024-07-15 16:21:04.802437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 
00:29:29.238 [2024-07-15 16:21:04.802864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.238 [2024-07-15 16:21:04.802871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 00:29:29.238 [2024-07-15 16:21:04.803275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.238 [2024-07-15 16:21:04.803282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 00:29:29.238 [2024-07-15 16:21:04.803671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.238 [2024-07-15 16:21:04.803679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 00:29:29.238 [2024-07-15 16:21:04.804112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.238 [2024-07-15 16:21:04.804120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 00:29:29.238 [2024-07-15 16:21:04.804534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.238 [2024-07-15 16:21:04.804543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 00:29:29.238 [2024-07-15 16:21:04.804953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.238 [2024-07-15 16:21:04.804962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 00:29:29.238 [2024-07-15 16:21:04.805467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.238 [2024-07-15 16:21:04.805496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 00:29:29.238 [2024-07-15 16:21:04.805897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.238 [2024-07-15 16:21:04.805906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 00:29:29.238 [2024-07-15 16:21:04.806415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.238 [2024-07-15 16:21:04.806444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 00:29:29.238 [2024-07-15 16:21:04.806858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.238 [2024-07-15 16:21:04.806867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 
00:29:29.238 [2024-07-15 16:21:04.807359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.238 [2024-07-15 16:21:04.807388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 00:29:29.238 [2024-07-15 16:21:04.807791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.238 [2024-07-15 16:21:04.807800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.238 qpair failed and we were unable to recover it. 00:29:29.239 [2024-07-15 16:21:04.808216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-07-15 16:21:04.808224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-07-15 16:21:04.808633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-07-15 16:21:04.808641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-07-15 16:21:04.808905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-07-15 16:21:04.808913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-07-15 16:21:04.809293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-07-15 16:21:04.809301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-07-15 16:21:04.809708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-07-15 16:21:04.809716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-07-15 16:21:04.810133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-07-15 16:21:04.810142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-07-15 16:21:04.810545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-07-15 16:21:04.810554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-07-15 16:21:04.810808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-07-15 16:21:04.810816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 
00:29:29.239 [2024-07-15 16:21:04.811226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-07-15 16:21:04.811234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-07-15 16:21:04.811651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-07-15 16:21:04.811659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-07-15 16:21:04.812040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-07-15 16:21:04.812049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-07-15 16:21:04.812436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-07-15 16:21:04.812445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-07-15 16:21:04.812865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-07-15 16:21:04.812876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-07-15 16:21:04.813251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-07-15 16:21:04.813260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-07-15 16:21:04.813649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-07-15 16:21:04.813657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-07-15 16:21:04.814049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-07-15 16:21:04.814057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-07-15 16:21:04.814469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-07-15 16:21:04.814477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-07-15 16:21:04.814891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-07-15 16:21:04.814899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 
00:29:29.239 [2024-07-15 16:21:04.815298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-07-15 16:21:04.815306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-07-15 16:21:04.815706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-07-15 16:21:04.815714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-07-15 16:21:04.816131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-07-15 16:21:04.816140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-07-15 16:21:04.816558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-07-15 16:21:04.816566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-07-15 16:21:04.816962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-07-15 16:21:04.816969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-07-15 16:21:04.817357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-07-15 16:21:04.817365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-07-15 16:21:04.817783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-07-15 16:21:04.817791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-07-15 16:21:04.818055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-07-15 16:21:04.818063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-07-15 16:21:04.818446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-07-15 16:21:04.818454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-07-15 16:21:04.818836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-07-15 16:21:04.818844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 
00:29:29.239 [2024-07-15 16:21:04.819128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-07-15 16:21:04.819135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-07-15 16:21:04.819407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-07-15 16:21:04.819415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 16:21:04.819804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 16:21:04.819811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 16:21:04.820304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 16:21:04.820333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 16:21:04.820756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 16:21:04.820766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 16:21:04.821210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 16:21:04.821219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 16:21:04.821619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 16:21:04.821627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 16:21:04.822016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 16:21:04.822024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 16:21:04.822422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 16:21:04.822430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 16:21:04.822832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 16:21:04.822840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 
00:29:29.240 [2024-07-15 16:21:04.823049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 16:21:04.823060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 16:21:04.823465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 16:21:04.823474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 16:21:04.823885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 16:21:04.823893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 16:21:04.824315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 16:21:04.824323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 16:21:04.824714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 16:21:04.824723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 16:21:04.824976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 16:21:04.824986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 16:21:04.825371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 16:21:04.825379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 16:21:04.825796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 16:21:04.825804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 16:21:04.826102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 16:21:04.826110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 16:21:04.826490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 16:21:04.826498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 
00:29:29.240 [2024-07-15 16:21:04.826910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 16:21:04.826917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 16:21:04.827413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 16:21:04.827442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 16:21:04.827844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 16:21:04.827855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 16:21:04.828350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 16:21:04.828379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 16:21:04.828793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 16:21:04.828806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 16:21:04.829175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 16:21:04.829185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 16:21:04.829576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 16:21:04.829584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 16:21:04.829975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 16:21:04.829983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 16:21:04.830391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 16:21:04.830399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 16:21:04.830814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 16:21:04.830822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 
00:29:29.241 [2024-07-15 16:21:04.831314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 16:21:04.831348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 16:21:04.831750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 16:21:04.831759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 16:21:04.832179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 16:21:04.832187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 16:21:04.832556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 16:21:04.832564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 16:21:04.832942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 16:21:04.832950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 16:21:04.833434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 16:21:04.833442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 16:21:04.833812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 16:21:04.833820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 16:21:04.834200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 16:21:04.834208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 16:21:04.834601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 16:21:04.834610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 16:21:04.835003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 16:21:04.835012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 
00:29:29.241 [2024-07-15 16:21:04.835416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 16:21:04.835425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 16:21:04.835834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 16:21:04.835843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 16:21:04.836306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 16:21:04.836314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 16:21:04.836504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 16:21:04.836514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 16:21:04.836884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 16:21:04.836892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 16:21:04.837300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 16:21:04.837309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 16:21:04.837706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 16:21:04.837715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 16:21:04.838107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 16:21:04.838116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 16:21:04.838429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 16:21:04.838436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 16:21:04.838832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 16:21:04.838841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 
00:29:29.241 [2024-07-15 16:21:04.839256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 16:21:04.839263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 16:21:04.839645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 16:21:04.839654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 16:21:04.840065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 16:21:04.840073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 16:21:04.840458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 16:21:04.840466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 16:21:04.840854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 16:21:04.840861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 16:21:04.841256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 16:21:04.841264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 16:21:04.841653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 16:21:04.841661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 16:21:04.842071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 16:21:04.842079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 16:21:04.842460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 16:21:04.842469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 16:21:04.842886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 16:21:04.842894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 
00:29:29.242 [2024-07-15 16:21:04.843263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 16:21:04.843271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 16:21:04.843654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 16:21:04.843662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 16:21:04.844058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 16:21:04.844067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 16:21:04.844452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 16:21:04.844461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 16:21:04.844861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 16:21:04.844871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 16:21:04.845283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 16:21:04.845291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 16:21:04.845681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 16:21:04.845688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 16:21:04.846081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 16:21:04.846089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 16:21:04.846512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 16:21:04.846520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 16:21:04.846938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 16:21:04.846946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 
00:29:29.242 [2024-07-15 16:21:04.847424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 16:21:04.847452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 16:21:04.847853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 16:21:04.847863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 16:21:04.848373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 16:21:04.848402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 16:21:04.848828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 16:21:04.848838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 16:21:04.849355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 16:21:04.849383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 16:21:04.849785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 16:21:04.849795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 16:21:04.850211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 16:21:04.850220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 16:21:04.850644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 16:21:04.850652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 16:21:04.850958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 16:21:04.850967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 16:21:04.851377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 16:21:04.851385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 
[... the same three-message failure sequence (posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every reconnect attempt timestamped 2024-07-15 16:21:04.851764 through 16:21:04.937663 ...]
00:29:29.249 [2024-07-15 16:21:04.938044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.249 [2024-07-15 16:21:04.938052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420
00:29:29.249 qpair failed and we were unable to recover it.
00:29:29.249 [2024-07-15 16:21:04.938437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.249 [2024-07-15 16:21:04.938445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.249 qpair failed and we were unable to recover it. 00:29:29.249 [2024-07-15 16:21:04.938734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.249 [2024-07-15 16:21:04.938742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.249 qpair failed and we were unable to recover it. 00:29:29.249 [2024-07-15 16:21:04.939153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.249 [2024-07-15 16:21:04.939161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.249 qpair failed and we were unable to recover it. 00:29:29.249 [2024-07-15 16:21:04.939561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.249 [2024-07-15 16:21:04.939569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.249 qpair failed and we were unable to recover it. 00:29:29.249 [2024-07-15 16:21:04.939961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.249 [2024-07-15 16:21:04.939969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.249 qpair failed and we were unable to recover it. 00:29:29.249 [2024-07-15 16:21:04.940172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.249 [2024-07-15 16:21:04.940181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.249 qpair failed and we were unable to recover it. 00:29:29.249 [2024-07-15 16:21:04.940553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.249 [2024-07-15 16:21:04.940562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.249 qpair failed and we were unable to recover it. 00:29:29.249 [2024-07-15 16:21:04.940950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.250 [2024-07-15 16:21:04.940958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.250 qpair failed and we were unable to recover it. 00:29:29.250 [2024-07-15 16:21:04.941352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.250 [2024-07-15 16:21:04.941360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.250 qpair failed and we were unable to recover it. 00:29:29.250 [2024-07-15 16:21:04.941767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.250 [2024-07-15 16:21:04.941775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.250 qpair failed and we were unable to recover it. 
00:29:29.250 [2024-07-15 16:21:04.942190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.250 [2024-07-15 16:21:04.942198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.250 qpair failed and we were unable to recover it. 00:29:29.250 [2024-07-15 16:21:04.942587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.250 [2024-07-15 16:21:04.942598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.250 qpair failed and we were unable to recover it. 00:29:29.250 [2024-07-15 16:21:04.942996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.250 [2024-07-15 16:21:04.943005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.250 qpair failed and we were unable to recover it. 00:29:29.250 [2024-07-15 16:21:04.943385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.250 [2024-07-15 16:21:04.943393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.250 qpair failed and we were unable to recover it. 00:29:29.250 [2024-07-15 16:21:04.943789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.250 [2024-07-15 16:21:04.943797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.250 qpair failed and we were unable to recover it. 00:29:29.250 [2024-07-15 16:21:04.944093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.250 [2024-07-15 16:21:04.944102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.250 qpair failed and we were unable to recover it. 00:29:29.250 [2024-07-15 16:21:04.944402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.250 [2024-07-15 16:21:04.944410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.250 qpair failed and we were unable to recover it. 00:29:29.250 [2024-07-15 16:21:04.944708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.250 [2024-07-15 16:21:04.944717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.250 qpair failed and we were unable to recover it. 00:29:29.250 [2024-07-15 16:21:04.945097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.250 [2024-07-15 16:21:04.945105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.250 qpair failed and we were unable to recover it. 00:29:29.250 [2024-07-15 16:21:04.945508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.250 [2024-07-15 16:21:04.945516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.250 qpair failed and we were unable to recover it. 
00:29:29.250 [2024-07-15 16:21:04.945966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.250 [2024-07-15 16:21:04.945974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.250 qpair failed and we were unable to recover it. 00:29:29.250 [2024-07-15 16:21:04.946449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.250 [2024-07-15 16:21:04.946478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.250 qpair failed and we were unable to recover it. 00:29:29.250 [2024-07-15 16:21:04.946761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.250 [2024-07-15 16:21:04.946770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.250 qpair failed and we were unable to recover it. 00:29:29.250 [2024-07-15 16:21:04.947166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.250 [2024-07-15 16:21:04.947175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.250 qpair failed and we were unable to recover it. 00:29:29.250 [2024-07-15 16:21:04.947384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.250 [2024-07-15 16:21:04.947395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.250 qpair failed and we were unable to recover it. 00:29:29.250 [2024-07-15 16:21:04.947802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.250 [2024-07-15 16:21:04.947811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.250 qpair failed and we were unable to recover it. 00:29:29.250 [2024-07-15 16:21:04.948207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.250 [2024-07-15 16:21:04.948215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.250 qpair failed and we were unable to recover it. 00:29:29.250 [2024-07-15 16:21:04.948613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.250 [2024-07-15 16:21:04.948621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.250 qpair failed and we were unable to recover it. 00:29:29.250 [2024-07-15 16:21:04.949081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.250 [2024-07-15 16:21:04.949089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.250 qpair failed and we were unable to recover it. 00:29:29.250 [2024-07-15 16:21:04.949469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.250 [2024-07-15 16:21:04.949477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.250 qpair failed and we were unable to recover it. 
00:29:29.250 [2024-07-15 16:21:04.949866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.250 [2024-07-15 16:21:04.949874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.250 qpair failed and we were unable to recover it. 00:29:29.250 [2024-07-15 16:21:04.950263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.250 [2024-07-15 16:21:04.950272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.250 qpair failed and we were unable to recover it. 00:29:29.250 [2024-07-15 16:21:04.950600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.250 [2024-07-15 16:21:04.950609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.250 qpair failed and we were unable to recover it. 00:29:29.250 [2024-07-15 16:21:04.951050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.250 [2024-07-15 16:21:04.951059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.250 qpair failed and we were unable to recover it. 00:29:29.250 [2024-07-15 16:21:04.951460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.250 [2024-07-15 16:21:04.951468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.250 qpair failed and we were unable to recover it. 00:29:29.250 [2024-07-15 16:21:04.951899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.250 [2024-07-15 16:21:04.951908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.250 qpair failed and we were unable to recover it. 00:29:29.250 [2024-07-15 16:21:04.952317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.250 [2024-07-15 16:21:04.952325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.250 qpair failed and we were unable to recover it. 00:29:29.250 [2024-07-15 16:21:04.952737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.250 [2024-07-15 16:21:04.952745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.250 qpair failed and we were unable to recover it. 00:29:29.250 [2024-07-15 16:21:04.953135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.250 [2024-07-15 16:21:04.953143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.250 qpair failed and we were unable to recover it. 00:29:29.251 [2024-07-15 16:21:04.953531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.251 [2024-07-15 16:21:04.953539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.251 qpair failed and we were unable to recover it. 
00:29:29.251 [2024-07-15 16:21:04.953956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.251 [2024-07-15 16:21:04.953964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.251 qpair failed and we were unable to recover it. 00:29:29.251 [2024-07-15 16:21:04.954382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.251 [2024-07-15 16:21:04.954390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.251 qpair failed and we were unable to recover it. 00:29:29.251 [2024-07-15 16:21:04.954780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.251 [2024-07-15 16:21:04.954789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.251 qpair failed and we were unable to recover it. 00:29:29.251 [2024-07-15 16:21:04.955222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.251 [2024-07-15 16:21:04.955231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.251 qpair failed and we were unable to recover it. 00:29:29.251 [2024-07-15 16:21:04.955623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.251 [2024-07-15 16:21:04.955631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.251 qpair failed and we were unable to recover it. 00:29:29.251 [2024-07-15 16:21:04.956048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.251 [2024-07-15 16:21:04.956056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.251 qpair failed and we were unable to recover it. 00:29:29.251 [2024-07-15 16:21:04.956448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.251 [2024-07-15 16:21:04.956457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.251 qpair failed and we were unable to recover it. 00:29:29.251 [2024-07-15 16:21:04.956845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.251 [2024-07-15 16:21:04.956853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.251 qpair failed and we were unable to recover it. 00:29:29.251 [2024-07-15 16:21:04.957244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.251 [2024-07-15 16:21:04.957252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.251 qpair failed and we were unable to recover it. 00:29:29.251 [2024-07-15 16:21:04.957677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.251 [2024-07-15 16:21:04.957684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.251 qpair failed and we were unable to recover it. 
00:29:29.251 [2024-07-15 16:21:04.958113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.251 [2024-07-15 16:21:04.958121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.251 qpair failed and we were unable to recover it. 00:29:29.251 [2024-07-15 16:21:04.958529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.251 [2024-07-15 16:21:04.958539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.251 qpair failed and we were unable to recover it. 00:29:29.251 [2024-07-15 16:21:04.958935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.251 [2024-07-15 16:21:04.958943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.251 qpair failed and we were unable to recover it. 00:29:29.251 [2024-07-15 16:21:04.959453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.251 [2024-07-15 16:21:04.959483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.251 qpair failed and we were unable to recover it. 00:29:29.251 [2024-07-15 16:21:04.959885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.251 [2024-07-15 16:21:04.959894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.251 qpair failed and we were unable to recover it. 00:29:29.251 [2024-07-15 16:21:04.960386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.251 [2024-07-15 16:21:04.960414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.251 qpair failed and we were unable to recover it. 00:29:29.251 [2024-07-15 16:21:04.960820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.251 [2024-07-15 16:21:04.960829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.251 qpair failed and we were unable to recover it. 00:29:29.251 [2024-07-15 16:21:04.961352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.251 [2024-07-15 16:21:04.961382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.251 qpair failed and we were unable to recover it. 00:29:29.251 [2024-07-15 16:21:04.961784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.251 [2024-07-15 16:21:04.961794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.251 qpair failed and we were unable to recover it. 00:29:29.251 [2024-07-15 16:21:04.962053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.251 [2024-07-15 16:21:04.962063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.251 qpair failed and we were unable to recover it. 
00:29:29.251 [2024-07-15 16:21:04.962462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.251 [2024-07-15 16:21:04.962470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.251 qpair failed and we were unable to recover it. 00:29:29.251 [2024-07-15 16:21:04.962889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.251 [2024-07-15 16:21:04.962898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.251 qpair failed and we were unable to recover it. 00:29:29.251 [2024-07-15 16:21:04.963300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.251 [2024-07-15 16:21:04.963309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.251 qpair failed and we were unable to recover it. 00:29:29.251 [2024-07-15 16:21:04.963746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.251 [2024-07-15 16:21:04.963754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.251 qpair failed and we were unable to recover it. 00:29:29.251 [2024-07-15 16:21:04.964146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.251 [2024-07-15 16:21:04.964155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.251 qpair failed and we were unable to recover it. 00:29:29.252 [2024-07-15 16:21:04.964529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 16:21:04.964537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 00:29:29.252 [2024-07-15 16:21:04.964933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 16:21:04.964941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 00:29:29.252 [2024-07-15 16:21:04.965330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 16:21:04.965338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 00:29:29.252 [2024-07-15 16:21:04.965719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 16:21:04.965727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 00:29:29.252 [2024-07-15 16:21:04.966139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 16:21:04.966147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 
00:29:29.252 [2024-07-15 16:21:04.966537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 16:21:04.966545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 00:29:29.252 [2024-07-15 16:21:04.966964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 16:21:04.966972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 00:29:29.252 [2024-07-15 16:21:04.967365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 16:21:04.967374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 00:29:29.252 [2024-07-15 16:21:04.967761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 16:21:04.967770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 00:29:29.252 [2024-07-15 16:21:04.967971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 16:21:04.967981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 00:29:29.252 [2024-07-15 16:21:04.968338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 16:21:04.968346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 00:29:29.252 [2024-07-15 16:21:04.968639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 16:21:04.968647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 00:29:29.252 [2024-07-15 16:21:04.969024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 16:21:04.969033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 00:29:29.252 [2024-07-15 16:21:04.969444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 16:21:04.969452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 00:29:29.252 [2024-07-15 16:21:04.969842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 16:21:04.969849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 
00:29:29.252 [2024-07-15 16:21:04.970251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 16:21:04.970259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 00:29:29.252 [2024-07-15 16:21:04.970678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 16:21:04.970685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 00:29:29.252 [2024-07-15 16:21:04.971083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 16:21:04.971091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 00:29:29.252 [2024-07-15 16:21:04.971482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 16:21:04.971491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 00:29:29.252 [2024-07-15 16:21:04.971886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 16:21:04.971894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 00:29:29.252 [2024-07-15 16:21:04.972306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 16:21:04.972314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 00:29:29.252 [2024-07-15 16:21:04.972538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 16:21:04.972546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 00:29:29.252 [2024-07-15 16:21:04.972947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 16:21:04.972956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 00:29:29.252 [2024-07-15 16:21:04.973162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 16:21:04.973172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 00:29:29.252 [2024-07-15 16:21:04.973474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 16:21:04.973482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 
00:29:29.252 [2024-07-15 16:21:04.973883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 16:21:04.973891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 00:29:29.252 [2024-07-15 16:21:04.974275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 16:21:04.974286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 00:29:29.252 [2024-07-15 16:21:04.974627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 16:21:04.974635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 00:29:29.252 [2024-07-15 16:21:04.975050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 16:21:04.975058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 00:29:29.252 [2024-07-15 16:21:04.975439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 16:21:04.975447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 16:21:04.975847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 16:21:04.975855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 16:21:04.976194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 16:21:04.976202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 16:21:04.976643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 16:21:04.976652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 16:21:04.976933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 16:21:04.976941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 16:21:04.977334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 16:21:04.977343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 
00:29:29.253 [2024-07-15 16:21:04.977743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 16:21:04.977751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 16:21:04.978085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 16:21:04.978093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 16:21:04.978480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 16:21:04.978488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 16:21:04.978880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 16:21:04.978888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 16:21:04.979173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 16:21:04.979181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 16:21:04.979570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 16:21:04.979578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 16:21:04.979877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 16:21:04.979884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 16:21:04.980354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 16:21:04.980362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 16:21:04.980763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 16:21:04.980771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 16:21:04.981046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 16:21:04.981054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 
00:29:29.253 [2024-07-15 16:21:04.981459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 16:21:04.981467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 16:21:04.981858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 16:21:04.981867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 16:21:04.982256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 16:21:04.982264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 16:21:04.982643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 16:21:04.982650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 16:21:04.983113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 16:21:04.983121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 16:21:04.983317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 16:21:04.983326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 16:21:04.983733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 16:21:04.983742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 16:21:04.984156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 16:21:04.984165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 16:21:04.984558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 16:21:04.984566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 16:21:04.984990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 16:21:04.984999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 
00:29:29.253 [2024-07-15 16:21:04.985424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 16:21:04.985432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 16:21:04.985693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 16:21:04.985701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 16:21:04.986092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 16:21:04.986101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 16:21:04.986380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 16:21:04.986388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 16:21:04.986770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 16:21:04.986778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 16:21:04.987200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 16:21:04.987209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 16:21:04.987612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 16:21:04.987620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 16:21:04.988049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 16:21:04.988057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 16:21:04.988474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 16:21:04.988482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 16:21:04.988872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 16:21:04.988880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 
00:29:29.254 [2024-07-15 16:21:04.989267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 16:21:04.989275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 16:21:04.989671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 16:21:04.989680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 16:21:04.990081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 16:21:04.990089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 16:21:04.990490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 16:21:04.990499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 16:21:04.990890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 16:21:04.990898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 16:21:04.991386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 16:21:04.991415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 16:21:04.991803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 16:21:04.991812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 16:21:04.992234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 16:21:04.992243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 16:21:04.992655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 16:21:04.992663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 16:21:04.993072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 16:21:04.993080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 
00:29:29.254 [2024-07-15 16:21:04.993479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 16:21:04.993487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 16:21:04.993876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 16:21:04.993884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 16:21:04.994062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 16:21:04.994072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 16:21:04.994480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 16:21:04.994489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 16:21:04.994810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 16:21:04.994821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 16:21:04.995206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 16:21:04.995215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 16:21:04.995607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 16:21:04.995615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 16:21:04.995870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 16:21:04.995878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 16:21:04.996315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 16:21:04.996323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 16:21:04.996706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 16:21:04.996714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 
00:29:29.254 [2024-07-15 16:21:04.997025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 16:21:04.997033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 16:21:04.997279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 16:21:04.997286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 16:21:04.997668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 16:21:04.997676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 16:21:04.998087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 16:21:04.998095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 16:21:04.998452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 16:21:04.998460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 16:21:04.998851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 16:21:04.998859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 16:21:04.999250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 16:21:04.999260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 16:21:04.999671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 16:21:04.999680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 16:21:05.000074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 16:21:05.000084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 16:21:05.000504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 16:21:05.000513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 
00:29:29.255 [2024-07-15 16:21:05.000910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 16:21:05.000918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 16:21:05.001293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 16:21:05.001301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 16:21:05.001691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 16:21:05.001699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 16:21:05.002075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 16:21:05.002083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 16:21:05.002491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 16:21:05.002499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 16:21:05.002891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 16:21:05.002899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 16:21:05.003402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 16:21:05.003431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 16:21:05.003835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 16:21:05.003844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 16:21:05.004236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 16:21:05.004245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 16:21:05.004394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 16:21:05.004402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 
00:29:29.255 [2024-07-15 16:21:05.004763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 16:21:05.004771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 16:21:05.005161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 16:21:05.005173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 16:21:05.005564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 16:21:05.005573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 16:21:05.005982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 16:21:05.005991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 16:21:05.006374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 16:21:05.006383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 16:21:05.006773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 16:21:05.006781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 16:21:05.007197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 16:21:05.007205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 16:21:05.007484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 16:21:05.007492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 16:21:05.007892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 16:21:05.007899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 16:21:05.008223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 16:21:05.008232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 
00:29:29.255 [2024-07-15 16:21:05.008495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 16:21:05.008503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 16:21:05.008880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 16:21:05.008888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 16:21:05.009282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 16:21:05.009290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 16:21:05.009547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 16:21:05.009554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 16:21:05.009936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 16:21:05.009944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 16:21:05.010336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 16:21:05.010344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 16:21:05.010732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 16:21:05.010739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 16:21:05.011130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 16:21:05.011138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 16:21:05.011535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 16:21:05.011543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 16:21:05.011952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 16:21:05.011961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 
00:29:29.255 [2024-07-15 16:21:05.012445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 16:21:05.012475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 16:21:05.012877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 16:21:05.012887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 16:21:05.013411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 16:21:05.013440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 16:21:05.013854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 16:21:05.013863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 16:21:05.014372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 16:21:05.014401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 16:21:05.014802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 16:21:05.014812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 16:21:05.015207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 16:21:05.015215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 16:21:05.015596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-07-15 16:21:05.015604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 00:29:29.256 [2024-07-15 16:21:05.015908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-07-15 16:21:05.015917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 00:29:29.256 [2024-07-15 16:21:05.016319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-07-15 16:21:05.016328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 
00:29:29.256 [2024-07-15 16:21:05.016741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-07-15 16:21:05.016748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 00:29:29.256 [2024-07-15 16:21:05.017165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-07-15 16:21:05.017174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 00:29:29.256 [2024-07-15 16:21:05.017573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-07-15 16:21:05.017581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 00:29:29.256 [2024-07-15 16:21:05.017974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-07-15 16:21:05.017981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 00:29:29.256 [2024-07-15 16:21:05.018384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-07-15 16:21:05.018392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 00:29:29.256 [2024-07-15 16:21:05.018771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-07-15 16:21:05.018779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 00:29:29.256 [2024-07-15 16:21:05.019166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-07-15 16:21:05.019175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 00:29:29.256 [2024-07-15 16:21:05.019574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-07-15 16:21:05.019583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 00:29:29.256 [2024-07-15 16:21:05.019971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-07-15 16:21:05.019979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 00:29:29.256 [2024-07-15 16:21:05.020274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-07-15 16:21:05.020283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 
00:29:29.256 [2024-07-15 16:21:05.020486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-07-15 16:21:05.020497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 00:29:29.256 [2024-07-15 16:21:05.020896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-07-15 16:21:05.020906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 00:29:29.256 [2024-07-15 16:21:05.021347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-07-15 16:21:05.021355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 00:29:29.256 [2024-07-15 16:21:05.021769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-07-15 16:21:05.021777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 00:29:29.256 [2024-07-15 16:21:05.022170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-07-15 16:21:05.022178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 00:29:29.256 [2024-07-15 16:21:05.022646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-07-15 16:21:05.022654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 00:29:29.256 [2024-07-15 16:21:05.022844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-07-15 16:21:05.022853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 00:29:29.256 [2024-07-15 16:21:05.023243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-07-15 16:21:05.023252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 00:29:29.256 [2024-07-15 16:21:05.023645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-07-15 16:21:05.023653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 00:29:29.256 [2024-07-15 16:21:05.024043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-07-15 16:21:05.024052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 
00:29:29.256 [2024-07-15 16:21:05.024452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-07-15 16:21:05.024460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 00:29:29.256 [2024-07-15 16:21:05.024762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-07-15 16:21:05.024771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 00:29:29.256 [2024-07-15 16:21:05.025158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-07-15 16:21:05.025167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 00:29:29.256 [2024-07-15 16:21:05.025557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-07-15 16:21:05.025564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 00:29:29.256 [2024-07-15 16:21:05.025966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-07-15 16:21:05.025974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 00:29:29.256 [2024-07-15 16:21:05.026415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-07-15 16:21:05.026424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 00:29:29.256 [2024-07-15 16:21:05.026820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-07-15 16:21:05.026827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 00:29:29.256 [2024-07-15 16:21:05.027217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-07-15 16:21:05.027225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 00:29:29.256 [2024-07-15 16:21:05.027633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-07-15 16:21:05.027642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 00:29:29.256 [2024-07-15 16:21:05.028049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-07-15 16:21:05.028057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 
00:29:29.256 [2024-07-15 16:21:05.028451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-07-15 16:21:05.028459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 00:29:29.256 [2024-07-15 16:21:05.028861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-07-15 16:21:05.028869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 00:29:29.256 [2024-07-15 16:21:05.029260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-07-15 16:21:05.029268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 00:29:29.256 [2024-07-15 16:21:05.029684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-07-15 16:21:05.029692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.257 qpair failed and we were unable to recover it. 00:29:29.257 [2024-07-15 16:21:05.029860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.257 [2024-07-15 16:21:05.029868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.257 qpair failed and we were unable to recover it. 00:29:29.257 [2024-07-15 16:21:05.030278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.257 [2024-07-15 16:21:05.030287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.257 qpair failed and we were unable to recover it. 00:29:29.257 [2024-07-15 16:21:05.030676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.257 [2024-07-15 16:21:05.030685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.257 qpair failed and we were unable to recover it. 00:29:29.257 [2024-07-15 16:21:05.031094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.257 [2024-07-15 16:21:05.031103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.257 qpair failed and we were unable to recover it. 00:29:29.257 [2024-07-15 16:21:05.031500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.257 [2024-07-15 16:21:05.031510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.257 qpair failed and we were unable to recover it. 00:29:29.257 [2024-07-15 16:21:05.031892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.257 [2024-07-15 16:21:05.031900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.257 qpair failed and we were unable to recover it. 
00:29:29.257 [2024-07-15 16:21:05.032317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.257 [2024-07-15 16:21:05.032325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.257 qpair failed and we were unable to recover it. 00:29:29.257 [2024-07-15 16:21:05.032736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.257 [2024-07-15 16:21:05.032744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.257 qpair failed and we were unable to recover it. 00:29:29.257 [2024-07-15 16:21:05.033130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.257 [2024-07-15 16:21:05.033139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.257 qpair failed and we were unable to recover it. 00:29:29.257 [2024-07-15 16:21:05.033528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.257 [2024-07-15 16:21:05.033536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.257 qpair failed and we were unable to recover it. 00:29:29.257 [2024-07-15 16:21:05.033932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.257 [2024-07-15 16:21:05.033940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.257 qpair failed and we were unable to recover it. 00:29:29.257 [2024-07-15 16:21:05.034464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.257 [2024-07-15 16:21:05.034493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.257 qpair failed and we were unable to recover it. 00:29:29.257 [2024-07-15 16:21:05.034894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.257 [2024-07-15 16:21:05.034903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.257 qpair failed and we were unable to recover it. 00:29:29.257 [2024-07-15 16:21:05.035423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.257 [2024-07-15 16:21:05.035452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.257 qpair failed and we were unable to recover it. 00:29:29.257 [2024-07-15 16:21:05.035854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.257 [2024-07-15 16:21:05.035863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.257 qpair failed and we were unable to recover it. 00:29:29.257 [2024-07-15 16:21:05.036072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.257 [2024-07-15 16:21:05.036081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.257 qpair failed and we were unable to recover it. 
00:29:29.257 [2024-07-15 16:21:05.036465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.257 [2024-07-15 16:21:05.036475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.257 qpair failed and we were unable to recover it. 00:29:29.257 [2024-07-15 16:21:05.036869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.257 [2024-07-15 16:21:05.036880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.257 qpair failed and we were unable to recover it. 00:29:29.257 [2024-07-15 16:21:05.037368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.257 [2024-07-15 16:21:05.037397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.257 qpair failed and we were unable to recover it. 00:29:29.257 [2024-07-15 16:21:05.037797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.257 [2024-07-15 16:21:05.037806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.257 qpair failed and we were unable to recover it. 00:29:29.257 [2024-07-15 16:21:05.038201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.257 [2024-07-15 16:21:05.038209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.257 qpair failed and we were unable to recover it. 00:29:29.257 [2024-07-15 16:21:05.038608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.257 [2024-07-15 16:21:05.038616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.257 qpair failed and we were unable to recover it. 00:29:29.257 [2024-07-15 16:21:05.039025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.257 [2024-07-15 16:21:05.039033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.257 qpair failed and we were unable to recover it. 00:29:29.257 [2024-07-15 16:21:05.039427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.257 [2024-07-15 16:21:05.039437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.257 qpair failed and we were unable to recover it. 00:29:29.257 [2024-07-15 16:21:05.039697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.257 [2024-07-15 16:21:05.039707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.257 qpair failed and we were unable to recover it. 00:29:29.257 [2024-07-15 16:21:05.040177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.257 [2024-07-15 16:21:05.040186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.257 qpair failed and we were unable to recover it. 
00:29:29.257 [2024-07-15 16:21:05.040574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.257 [2024-07-15 16:21:05.040583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.257 qpair failed and we were unable to recover it. 00:29:29.257 [2024-07-15 16:21:05.040998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.257 [2024-07-15 16:21:05.041006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.257 qpair failed and we were unable to recover it. 00:29:29.257 [2024-07-15 16:21:05.041425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.257 [2024-07-15 16:21:05.041433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.257 qpair failed and we were unable to recover it. 00:29:29.257 [2024-07-15 16:21:05.041828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.257 [2024-07-15 16:21:05.041836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.257 qpair failed and we were unable to recover it. 00:29:29.257 [2024-07-15 16:21:05.042234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.257 [2024-07-15 16:21:05.042242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.257 qpair failed and we were unable to recover it. 00:29:29.257 [2024-07-15 16:21:05.042690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.257 [2024-07-15 16:21:05.042698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.257 qpair failed and we were unable to recover it. 00:29:29.257 [2024-07-15 16:21:05.043095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.257 [2024-07-15 16:21:05.043103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.257 qpair failed and we were unable to recover it. 00:29:29.257 [2024-07-15 16:21:05.043489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.257 [2024-07-15 16:21:05.043498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.257 qpair failed and we were unable to recover it. 00:29:29.257 [2024-07-15 16:21:05.043705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.257 [2024-07-15 16:21:05.043716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.257 qpair failed and we were unable to recover it. 00:29:29.257 [2024-07-15 16:21:05.044092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.257 [2024-07-15 16:21:05.044101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.257 qpair failed and we were unable to recover it. 
00:29:29.257 [2024-07-15 16:21:05.044510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.257 [2024-07-15 16:21:05.044519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.257 qpair failed and we were unable to recover it. 00:29:29.257 [2024-07-15 16:21:05.044933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.258 [2024-07-15 16:21:05.044942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.258 qpair failed and we were unable to recover it. 00:29:29.258 [2024-07-15 16:21:05.045242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.258 [2024-07-15 16:21:05.045250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.258 qpair failed and we were unable to recover it. 00:29:29.258 [2024-07-15 16:21:05.045644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.258 [2024-07-15 16:21:05.045651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.258 qpair failed and we were unable to recover it. 00:29:29.258 [2024-07-15 16:21:05.045807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.258 [2024-07-15 16:21:05.045815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.258 qpair failed and we were unable to recover it. 00:29:29.258 [2024-07-15 16:21:05.046221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.258 [2024-07-15 16:21:05.046229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.258 qpair failed and we were unable to recover it. 00:29:29.258 [2024-07-15 16:21:05.046625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.258 [2024-07-15 16:21:05.046634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.258 qpair failed and we were unable to recover it. 00:29:29.258 [2024-07-15 16:21:05.047043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.258 [2024-07-15 16:21:05.047051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.258 qpair failed and we were unable to recover it. 00:29:29.258 [2024-07-15 16:21:05.047484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.258 [2024-07-15 16:21:05.047492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.258 qpair failed and we were unable to recover it. 00:29:29.258 [2024-07-15 16:21:05.047885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.258 [2024-07-15 16:21:05.047894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.258 qpair failed and we were unable to recover it. 
00:29:29.258 [2024-07-15 16:21:05.048277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.258 [2024-07-15 16:21:05.048286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.258 qpair failed and we were unable to recover it. 00:29:29.258 [2024-07-15 16:21:05.048685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.258 [2024-07-15 16:21:05.048694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.258 qpair failed and we were unable to recover it. 00:29:29.258 [2024-07-15 16:21:05.049085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.258 [2024-07-15 16:21:05.049093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.258 qpair failed and we were unable to recover it. 00:29:29.258 [2024-07-15 16:21:05.049546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.258 [2024-07-15 16:21:05.049554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.258 qpair failed and we were unable to recover it. 00:29:29.258 [2024-07-15 16:21:05.049947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.258 [2024-07-15 16:21:05.049955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.258 qpair failed and we were unable to recover it. 00:29:29.258 [2024-07-15 16:21:05.050475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.258 [2024-07-15 16:21:05.050504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.258 qpair failed and we were unable to recover it. 00:29:29.258 [2024-07-15 16:21:05.050715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.258 [2024-07-15 16:21:05.050725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.258 qpair failed and we were unable to recover it. 00:29:29.258 [2024-07-15 16:21:05.051039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.258 [2024-07-15 16:21:05.051048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.258 qpair failed and we were unable to recover it. 00:29:29.258 [2024-07-15 16:21:05.051525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.258 [2024-07-15 16:21:05.051533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.258 qpair failed and we were unable to recover it. 00:29:29.258 [2024-07-15 16:21:05.051948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.258 [2024-07-15 16:21:05.051957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.258 qpair failed and we were unable to recover it. 
00:29:29.258 [2024-07-15 16:21:05.052449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.258 [2024-07-15 16:21:05.052478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.258 qpair failed and we were unable to recover it. 00:29:29.258 [2024-07-15 16:21:05.052885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.258 [2024-07-15 16:21:05.052898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.258 qpair failed and we were unable to recover it. 00:29:29.258 [2024-07-15 16:21:05.053327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.258 [2024-07-15 16:21:05.053357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.258 qpair failed and we were unable to recover it. 00:29:29.258 [2024-07-15 16:21:05.053791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.258 [2024-07-15 16:21:05.053800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.258 qpair failed and we were unable to recover it. 00:29:29.258 [2024-07-15 16:21:05.054199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.258 [2024-07-15 16:21:05.054207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.258 qpair failed and we were unable to recover it. 00:29:29.258 [2024-07-15 16:21:05.054608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.258 [2024-07-15 16:21:05.054616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.258 qpair failed and we were unable to recover it. 00:29:29.258 [2024-07-15 16:21:05.055046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.258 [2024-07-15 16:21:05.055054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.258 qpair failed and we were unable to recover it. 00:29:29.258 [2024-07-15 16:21:05.055416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.258 [2024-07-15 16:21:05.055424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.258 qpair failed and we were unable to recover it. 00:29:29.258 [2024-07-15 16:21:05.055821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.258 [2024-07-15 16:21:05.055829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.258 qpair failed and we were unable to recover it. 00:29:29.258 [2024-07-15 16:21:05.056158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.258 [2024-07-15 16:21:05.056168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.258 qpair failed and we were unable to recover it. 
00:29:29.258 [2024-07-15 16:21:05.056537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.258 [2024-07-15 16:21:05.056545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.258 qpair failed and we were unable to recover it. 00:29:29.258 [2024-07-15 16:21:05.056988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.258 [2024-07-15 16:21:05.056996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.258 qpair failed and we were unable to recover it. 00:29:29.258 [2024-07-15 16:21:05.057388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.258 [2024-07-15 16:21:05.057397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.258 qpair failed and we were unable to recover it. 00:29:29.258 [2024-07-15 16:21:05.057832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.258 [2024-07-15 16:21:05.057840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.258 qpair failed and we were unable to recover it. 00:29:29.258 [2024-07-15 16:21:05.058222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.258 [2024-07-15 16:21:05.058231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.258 qpair failed and we were unable to recover it. 00:29:29.258 [2024-07-15 16:21:05.058634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.258 [2024-07-15 16:21:05.058643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.258 qpair failed and we were unable to recover it. 00:29:29.258 [2024-07-15 16:21:05.059033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.258 [2024-07-15 16:21:05.059041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.258 qpair failed and we were unable to recover it. 00:29:29.258 [2024-07-15 16:21:05.059455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.258 [2024-07-15 16:21:05.059463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.258 qpair failed and we were unable to recover it. 00:29:29.258 [2024-07-15 16:21:05.059858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.258 [2024-07-15 16:21:05.059866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.258 qpair failed and we were unable to recover it. 00:29:29.258 [2024-07-15 16:21:05.060281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.258 [2024-07-15 16:21:05.060289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.258 qpair failed and we were unable to recover it. 
00:29:29.258 [2024-07-15 16:21:05.060687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.258 [2024-07-15 16:21:05.060695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.258 qpair failed and we were unable to recover it. 00:29:29.258 [2024-07-15 16:21:05.061078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.258 [2024-07-15 16:21:05.061086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.258 qpair failed and we were unable to recover it. 00:29:29.259 [2024-07-15 16:21:05.061508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.259 [2024-07-15 16:21:05.061516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.259 qpair failed and we were unable to recover it. 00:29:29.259 [2024-07-15 16:21:05.061924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.259 [2024-07-15 16:21:05.061932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.259 qpair failed and we were unable to recover it. 00:29:29.571 [2024-07-15 16:21:05.062410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.571 [2024-07-15 16:21:05.062440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.571 qpair failed and we were unable to recover it. 00:29:29.571 [2024-07-15 16:21:05.062841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.571 [2024-07-15 16:21:05.062851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.571 qpair failed and we were unable to recover it. 00:29:29.571 [2024-07-15 16:21:05.063363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.571 [2024-07-15 16:21:05.063392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.571 qpair failed and we were unable to recover it. 00:29:29.571 [2024-07-15 16:21:05.063822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.571 [2024-07-15 16:21:05.063832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.571 qpair failed and we were unable to recover it. 00:29:29.571 [2024-07-15 16:21:05.064161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.571 [2024-07-15 16:21:05.064170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.571 qpair failed and we were unable to recover it. 00:29:29.571 [2024-07-15 16:21:05.064571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.571 [2024-07-15 16:21:05.064579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.571 qpair failed and we were unable to recover it. 
00:29:29.571 [2024-07-15 16:21:05.064984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.571 [2024-07-15 16:21:05.064992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.571 qpair failed and we were unable to recover it. 00:29:29.571 [2024-07-15 16:21:05.065288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.571 [2024-07-15 16:21:05.065296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.571 qpair failed and we were unable to recover it. 00:29:29.571 [2024-07-15 16:21:05.065764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.571 [2024-07-15 16:21:05.065772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.571 qpair failed and we were unable to recover it. 00:29:29.571 [2024-07-15 16:21:05.066129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.571 [2024-07-15 16:21:05.066137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.571 qpair failed and we were unable to recover it. 00:29:29.571 [2024-07-15 16:21:05.066554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.571 [2024-07-15 16:21:05.066562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.571 qpair failed and we were unable to recover it. 00:29:29.571 [2024-07-15 16:21:05.066868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.571 [2024-07-15 16:21:05.066876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.571 qpair failed and we were unable to recover it. 00:29:29.571 [2024-07-15 16:21:05.067386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.571 [2024-07-15 16:21:05.067416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.571 qpair failed and we were unable to recover it. 00:29:29.571 [2024-07-15 16:21:05.067824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.571 [2024-07-15 16:21:05.067834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.571 qpair failed and we were unable to recover it. 00:29:29.571 [2024-07-15 16:21:05.068323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.571 [2024-07-15 16:21:05.068352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.571 qpair failed and we were unable to recover it. 00:29:29.571 [2024-07-15 16:21:05.068636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.571 [2024-07-15 16:21:05.068646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.571 qpair failed and we were unable to recover it. 
00:29:29.571 [2024-07-15 16:21:05.069079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.571 [2024-07-15 16:21:05.069088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.571 qpair failed and we were unable to recover it. 00:29:29.571 [2024-07-15 16:21:05.069473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.571 [2024-07-15 16:21:05.069486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.571 qpair failed and we were unable to recover it. 00:29:29.571 [2024-07-15 16:21:05.069873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.571 [2024-07-15 16:21:05.069882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.571 qpair failed and we were unable to recover it. 00:29:29.571 [2024-07-15 16:21:05.070267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.571 [2024-07-15 16:21:05.070276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.571 qpair failed and we were unable to recover it. 00:29:29.571 [2024-07-15 16:21:05.070604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.571 [2024-07-15 16:21:05.070614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.571 qpair failed and we were unable to recover it. 00:29:29.571 [2024-07-15 16:21:05.071009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.571 [2024-07-15 16:21:05.071018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.571 qpair failed and we were unable to recover it. 00:29:29.571 [2024-07-15 16:21:05.071273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.571 [2024-07-15 16:21:05.071283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.571 qpair failed and we were unable to recover it. 00:29:29.571 [2024-07-15 16:21:05.071709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.571 [2024-07-15 16:21:05.071718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.571 qpair failed and we were unable to recover it. 00:29:29.571 [2024-07-15 16:21:05.072399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.571 [2024-07-15 16:21:05.072416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.571 qpair failed and we were unable to recover it. 00:29:29.571 [2024-07-15 16:21:05.072814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.571 [2024-07-15 16:21:05.072823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.571 qpair failed and we were unable to recover it. 
00:29:29.571 [2024-07-15 16:21:05.073223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.571 [2024-07-15 16:21:05.073232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.571 qpair failed and we were unable to recover it. 00:29:29.571 [2024-07-15 16:21:05.073415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.571 [2024-07-15 16:21:05.073425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.571 qpair failed and we were unable to recover it. 00:29:29.571 [2024-07-15 16:21:05.073729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.571 [2024-07-15 16:21:05.073738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.571 qpair failed and we were unable to recover it. 00:29:29.571 [2024-07-15 16:21:05.074136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.571 [2024-07-15 16:21:05.074145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.571 qpair failed and we were unable to recover it. 00:29:29.571 [2024-07-15 16:21:05.074549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.571 [2024-07-15 16:21:05.074558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.571 qpair failed and we were unable to recover it. 00:29:29.571 [2024-07-15 16:21:05.074952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.571 [2024-07-15 16:21:05.074961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.571 qpair failed and we were unable to recover it. 00:29:29.571 [2024-07-15 16:21:05.075432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.571 [2024-07-15 16:21:05.075449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.571 qpair failed and we were unable to recover it. 00:29:29.571 [2024-07-15 16:21:05.075854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.571 [2024-07-15 16:21:05.075864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.571 qpair failed and we were unable to recover it. 00:29:29.571 [2024-07-15 16:21:05.076261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.571 [2024-07-15 16:21:05.076269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.571 qpair failed and we were unable to recover it. 00:29:29.571 [2024-07-15 16:21:05.076654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.571 [2024-07-15 16:21:05.076662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.571 qpair failed and we were unable to recover it. 
00:29:29.571 [2024-07-15 16:21:05.077076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.571 [2024-07-15 16:21:05.077084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.571 qpair failed and we were unable to recover it. 00:29:29.571 [2024-07-15 16:21:05.077541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.571 [2024-07-15 16:21:05.077549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.571 qpair failed and we were unable to recover it. 00:29:29.571 [2024-07-15 16:21:05.077804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.571 [2024-07-15 16:21:05.077812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.571 qpair failed and we were unable to recover it. 00:29:29.571 [2024-07-15 16:21:05.078139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.572 [2024-07-15 16:21:05.078147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.572 qpair failed and we were unable to recover it. 00:29:29.572 [2024-07-15 16:21:05.078578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.572 [2024-07-15 16:21:05.078586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.572 qpair failed and we were unable to recover it. 00:29:29.572 [2024-07-15 16:21:05.078978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.572 [2024-07-15 16:21:05.078987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.572 qpair failed and we were unable to recover it. 00:29:29.572 [2024-07-15 16:21:05.079376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.572 [2024-07-15 16:21:05.079385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.572 qpair failed and we were unable to recover it. 00:29:29.572 [2024-07-15 16:21:05.079771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.572 [2024-07-15 16:21:05.079780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.572 qpair failed and we were unable to recover it. 00:29:29.572 [2024-07-15 16:21:05.080193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.572 [2024-07-15 16:21:05.080202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.572 qpair failed and we were unable to recover it. 00:29:29.572 [2024-07-15 16:21:05.080597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.572 [2024-07-15 16:21:05.080605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.572 qpair failed and we were unable to recover it. 
00:29:29.572 [2024-07-15 16:21:05.080999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.572 [2024-07-15 16:21:05.081007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.572 qpair failed and we were unable to recover it. 00:29:29.572 [2024-07-15 16:21:05.081483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.572 [2024-07-15 16:21:05.081492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.572 qpair failed and we were unable to recover it. 00:29:29.572 [2024-07-15 16:21:05.081865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.572 [2024-07-15 16:21:05.081873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.572 qpair failed and we were unable to recover it. 00:29:29.572 [2024-07-15 16:21:05.082262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.572 [2024-07-15 16:21:05.082271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.572 qpair failed and we were unable to recover it. 00:29:29.572 [2024-07-15 16:21:05.082665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.572 [2024-07-15 16:21:05.082672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.572 qpair failed and we were unable to recover it. 00:29:29.572 [2024-07-15 16:21:05.083063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.572 [2024-07-15 16:21:05.083070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.572 qpair failed and we were unable to recover it. 00:29:29.572 [2024-07-15 16:21:05.083452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.572 [2024-07-15 16:21:05.083460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.572 qpair failed and we were unable to recover it. 00:29:29.572 [2024-07-15 16:21:05.083921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.572 [2024-07-15 16:21:05.083930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.572 qpair failed and we were unable to recover it. 00:29:29.572 [2024-07-15 16:21:05.084325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.572 [2024-07-15 16:21:05.084334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.572 qpair failed and we were unable to recover it. 00:29:29.572 [2024-07-15 16:21:05.084724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.572 [2024-07-15 16:21:05.084732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.572 qpair failed and we were unable to recover it. 
00:29:29.572 [2024-07-15 16:21:05.085143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.572 [2024-07-15 16:21:05.085152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.572 qpair failed and we were unable to recover it. 00:29:29.572 [2024-07-15 16:21:05.085554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.572 [2024-07-15 16:21:05.085563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.572 qpair failed and we were unable to recover it. 00:29:29.572 [2024-07-15 16:21:05.085946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.572 [2024-07-15 16:21:05.085954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.572 qpair failed and we were unable to recover it. 00:29:29.572 [2024-07-15 16:21:05.086252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.572 [2024-07-15 16:21:05.086260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.572 qpair failed and we were unable to recover it. 00:29:29.572 [2024-07-15 16:21:05.086661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.572 [2024-07-15 16:21:05.086669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.572 qpair failed and we were unable to recover it. 00:29:29.572 [2024-07-15 16:21:05.087132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.572 [2024-07-15 16:21:05.087141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.572 qpair failed and we were unable to recover it. 00:29:29.572 [2024-07-15 16:21:05.087334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.572 [2024-07-15 16:21:05.087343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.572 qpair failed and we were unable to recover it. 00:29:29.572 [2024-07-15 16:21:05.087731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.572 [2024-07-15 16:21:05.087740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.572 qpair failed and we were unable to recover it. 00:29:29.572 [2024-07-15 16:21:05.088154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.572 [2024-07-15 16:21:05.088163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.572 qpair failed and we were unable to recover it. 00:29:29.572 [2024-07-15 16:21:05.088558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.572 [2024-07-15 16:21:05.088567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.572 qpair failed and we were unable to recover it. 
00:29:29.572 [2024-07-15 16:21:05.089015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.572 [2024-07-15 16:21:05.089023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.572 qpair failed and we were unable to recover it. 00:29:29.572 [2024-07-15 16:21:05.089330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.572 [2024-07-15 16:21:05.089338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.572 qpair failed and we were unable to recover it. 00:29:29.572 [2024-07-15 16:21:05.089736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.572 [2024-07-15 16:21:05.089744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.572 qpair failed and we were unable to recover it. 00:29:29.572 [2024-07-15 16:21:05.090188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.572 [2024-07-15 16:21:05.090196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.572 qpair failed and we were unable to recover it. 00:29:29.572 [2024-07-15 16:21:05.090593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.572 [2024-07-15 16:21:05.090601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.572 qpair failed and we were unable to recover it. 00:29:29.572 [2024-07-15 16:21:05.090999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.572 [2024-07-15 16:21:05.091007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.572 qpair failed and we were unable to recover it. 00:29:29.572 [2024-07-15 16:21:05.091423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.572 [2024-07-15 16:21:05.091431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.572 qpair failed and we were unable to recover it. 00:29:29.572 [2024-07-15 16:21:05.091703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.572 [2024-07-15 16:21:05.091712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.572 qpair failed and we were unable to recover it. 00:29:29.572 [2024-07-15 16:21:05.092090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.572 [2024-07-15 16:21:05.092099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.572 qpair failed and we were unable to recover it. 00:29:29.572 [2024-07-15 16:21:05.092495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.572 [2024-07-15 16:21:05.092504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.572 qpair failed and we were unable to recover it. 
00:29:29.572 [2024-07-15 16:21:05.092915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.572 [2024-07-15 16:21:05.092923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.572 qpair failed and we were unable to recover it. 00:29:29.572 [2024-07-15 16:21:05.093315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.572 [2024-07-15 16:21:05.093323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.572 qpair failed and we were unable to recover it. 00:29:29.572 [2024-07-15 16:21:05.093716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.572 [2024-07-15 16:21:05.093724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.572 qpair failed and we were unable to recover it. 00:29:29.572 [2024-07-15 16:21:05.094131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.573 [2024-07-15 16:21:05.094140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.573 qpair failed and we were unable to recover it. 00:29:29.573 [2024-07-15 16:21:05.094446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.573 [2024-07-15 16:21:05.094454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.573 qpair failed and we were unable to recover it. 00:29:29.573 [2024-07-15 16:21:05.094889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.573 [2024-07-15 16:21:05.094896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.573 qpair failed and we were unable to recover it. 00:29:29.573 [2024-07-15 16:21:05.095387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.573 [2024-07-15 16:21:05.095416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.573 qpair failed and we were unable to recover it. 00:29:29.573 [2024-07-15 16:21:05.095819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.573 [2024-07-15 16:21:05.095829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.573 qpair failed and we were unable to recover it. 00:29:29.573 [2024-07-15 16:21:05.096245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.573 [2024-07-15 16:21:05.096254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.573 qpair failed and we were unable to recover it. 00:29:29.573 [2024-07-15 16:21:05.096652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.573 [2024-07-15 16:21:05.096660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.573 qpair failed and we were unable to recover it. 
00:29:29.573 [2024-07-15 16:21:05.097052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.573 [2024-07-15 16:21:05.097059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.573 qpair failed and we were unable to recover it. 00:29:29.573 [2024-07-15 16:21:05.097457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.573 [2024-07-15 16:21:05.097465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.573 qpair failed and we were unable to recover it. 00:29:29.573 [2024-07-15 16:21:05.097837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.573 [2024-07-15 16:21:05.097845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.573 qpair failed and we were unable to recover it. 00:29:29.573 [2024-07-15 16:21:05.098319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.573 [2024-07-15 16:21:05.098329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.573 qpair failed and we were unable to recover it. 00:29:29.573 [2024-07-15 16:21:05.098703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.573 [2024-07-15 16:21:05.098712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.573 qpair failed and we were unable to recover it. 00:29:29.573 [2024-07-15 16:21:05.099112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.573 [2024-07-15 16:21:05.099120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.573 qpair failed and we were unable to recover it. 00:29:29.573 [2024-07-15 16:21:05.099536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.573 [2024-07-15 16:21:05.099544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.573 qpair failed and we were unable to recover it. 00:29:29.573 [2024-07-15 16:21:05.099939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.573 [2024-07-15 16:21:05.099947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.573 qpair failed and we were unable to recover it. 00:29:29.573 [2024-07-15 16:21:05.100440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.573 [2024-07-15 16:21:05.100469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.573 qpair failed and we were unable to recover it. 00:29:29.573 [2024-07-15 16:21:05.100726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.573 [2024-07-15 16:21:05.100736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.573 qpair failed and we were unable to recover it. 
00:29:29.573 [2024-07-15 16:21:05.101159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.573 [2024-07-15 16:21:05.101168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.573 qpair failed and we were unable to recover it. 00:29:29.573 [2024-07-15 16:21:05.101643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.573 [2024-07-15 16:21:05.101655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.573 qpair failed and we were unable to recover it. 00:29:29.573 [2024-07-15 16:21:05.101956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.573 [2024-07-15 16:21:05.101965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.573 qpair failed and we were unable to recover it. 00:29:29.573 [2024-07-15 16:21:05.102231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.573 [2024-07-15 16:21:05.102239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.573 qpair failed and we were unable to recover it. 00:29:29.573 [2024-07-15 16:21:05.102530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.573 [2024-07-15 16:21:05.102537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.573 qpair failed and we were unable to recover it. 00:29:29.573 [2024-07-15 16:21:05.102956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.573 [2024-07-15 16:21:05.102963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.573 qpair failed and we were unable to recover it. 00:29:29.573 [2024-07-15 16:21:05.103356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.573 [2024-07-15 16:21:05.103364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.573 qpair failed and we were unable to recover it. 00:29:29.573 [2024-07-15 16:21:05.103756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.573 [2024-07-15 16:21:05.103763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.573 qpair failed and we were unable to recover it. 00:29:29.573 [2024-07-15 16:21:05.104181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.573 [2024-07-15 16:21:05.104189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.573 qpair failed and we were unable to recover it. 00:29:29.573 [2024-07-15 16:21:05.104585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.573 [2024-07-15 16:21:05.104594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.573 qpair failed and we were unable to recover it. 
00:29:29.573 [2024-07-15 16:21:05.105006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.573 [2024-07-15 16:21:05.105014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.573 qpair failed and we were unable to recover it. 00:29:29.573 [2024-07-15 16:21:05.105398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.573 [2024-07-15 16:21:05.105407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.573 qpair failed and we were unable to recover it. 00:29:29.573 [2024-07-15 16:21:05.105815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.573 [2024-07-15 16:21:05.105824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.573 qpair failed and we were unable to recover it. 00:29:29.573 [2024-07-15 16:21:05.106219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.573 [2024-07-15 16:21:05.106228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.573 qpair failed and we were unable to recover it. 00:29:29.573 [2024-07-15 16:21:05.106625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.573 [2024-07-15 16:21:05.106634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.573 qpair failed and we were unable to recover it. 00:29:29.573 [2024-07-15 16:21:05.107034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.573 [2024-07-15 16:21:05.107043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.573 qpair failed and we were unable to recover it. 00:29:29.573 [2024-07-15 16:21:05.107229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.573 [2024-07-15 16:21:05.107237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.573 qpair failed and we were unable to recover it. 00:29:29.573 [2024-07-15 16:21:05.107626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.573 [2024-07-15 16:21:05.107634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.573 qpair failed and we were unable to recover it. 00:29:29.573 [2024-07-15 16:21:05.108025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.573 [2024-07-15 16:21:05.108033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.573 qpair failed and we were unable to recover it. 00:29:29.573 [2024-07-15 16:21:05.108365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.573 [2024-07-15 16:21:05.108373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.573 qpair failed and we were unable to recover it. 
00:29:29.573 [2024-07-15 16:21:05.108784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.573 [2024-07-15 16:21:05.108792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.573 qpair failed and we were unable to recover it. 00:29:29.573 [2024-07-15 16:21:05.109172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.573 [2024-07-15 16:21:05.109180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.573 qpair failed and we were unable to recover it. 00:29:29.573 [2024-07-15 16:21:05.109579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.573 [2024-07-15 16:21:05.109587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.573 qpair failed and we were unable to recover it. 00:29:29.573 [2024-07-15 16:21:05.110053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.573 [2024-07-15 16:21:05.110060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.574 qpair failed and we were unable to recover it. 00:29:29.574 [2024-07-15 16:21:05.110434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.574 [2024-07-15 16:21:05.110442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.574 qpair failed and we were unable to recover it. 00:29:29.574 [2024-07-15 16:21:05.110739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.574 [2024-07-15 16:21:05.110748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.574 qpair failed and we were unable to recover it. 00:29:29.574 [2024-07-15 16:21:05.111144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.574 [2024-07-15 16:21:05.111152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.574 qpair failed and we were unable to recover it. 00:29:29.574 [2024-07-15 16:21:05.111552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.574 [2024-07-15 16:21:05.111560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.574 qpair failed and we were unable to recover it. 00:29:29.574 [2024-07-15 16:21:05.111972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.574 [2024-07-15 16:21:05.111980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.574 qpair failed and we were unable to recover it. 00:29:29.574 [2024-07-15 16:21:05.112373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.574 [2024-07-15 16:21:05.112381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.574 qpair failed and we were unable to recover it. 
00:29:29.574 [2024-07-15 16:21:05.112770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.574 [2024-07-15 16:21:05.112778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.574 qpair failed and we were unable to recover it. 00:29:29.574 [2024-07-15 16:21:05.113169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.574 [2024-07-15 16:21:05.113178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.574 qpair failed and we were unable to recover it. 00:29:29.574 [2024-07-15 16:21:05.113553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.574 [2024-07-15 16:21:05.113561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.574 qpair failed and we were unable to recover it. 00:29:29.574 [2024-07-15 16:21:05.113953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.574 [2024-07-15 16:21:05.113961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.574 qpair failed and we were unable to recover it. 00:29:29.574 [2024-07-15 16:21:05.114260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.574 [2024-07-15 16:21:05.114269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.574 qpair failed and we were unable to recover it. 00:29:29.574 [2024-07-15 16:21:05.114682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.574 [2024-07-15 16:21:05.114691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.574 qpair failed and we were unable to recover it. 00:29:29.574 [2024-07-15 16:21:05.115102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.574 [2024-07-15 16:21:05.115110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.574 qpair failed and we were unable to recover it. 00:29:29.574 [2024-07-15 16:21:05.115556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.574 [2024-07-15 16:21:05.115565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.574 qpair failed and we were unable to recover it. 00:29:29.574 [2024-07-15 16:21:05.115767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.574 [2024-07-15 16:21:05.115777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.574 qpair failed and we were unable to recover it. 00:29:29.574 [2024-07-15 16:21:05.116221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.574 [2024-07-15 16:21:05.116229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.574 qpair failed and we were unable to recover it. 
00:29:29.574 [2024-07-15 16:21:05.116626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.574 [2024-07-15 16:21:05.116634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.574 qpair failed and we were unable to recover it. 00:29:29.574 [2024-07-15 16:21:05.117023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.574 [2024-07-15 16:21:05.117033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.574 qpair failed and we were unable to recover it. 00:29:29.574 [2024-07-15 16:21:05.117445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.574 [2024-07-15 16:21:05.117453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.574 qpair failed and we were unable to recover it. 00:29:29.574 [2024-07-15 16:21:05.117845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.574 [2024-07-15 16:21:05.117853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.574 qpair failed and we were unable to recover it. 00:29:29.574 [2024-07-15 16:21:05.118266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.574 [2024-07-15 16:21:05.118275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.574 qpair failed and we were unable to recover it. 00:29:29.574 [2024-07-15 16:21:05.118668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.574 [2024-07-15 16:21:05.118676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.574 qpair failed and we were unable to recover it. 00:29:29.574 [2024-07-15 16:21:05.119069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.574 [2024-07-15 16:21:05.119078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.574 qpair failed and we were unable to recover it. 00:29:29.574 [2024-07-15 16:21:05.119359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.574 [2024-07-15 16:21:05.119368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.574 qpair failed and we were unable to recover it. 00:29:29.574 [2024-07-15 16:21:05.119760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.574 [2024-07-15 16:21:05.119769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.574 qpair failed and we were unable to recover it. 00:29:29.574 [2024-07-15 16:21:05.120168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.574 [2024-07-15 16:21:05.120177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.574 qpair failed and we were unable to recover it. 
00:29:29.574 [2024-07-15 16:21:05.120499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.574 [2024-07-15 16:21:05.120507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.574 qpair failed and we were unable to recover it. 00:29:29.574 [2024-07-15 16:21:05.120908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.574 [2024-07-15 16:21:05.120915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.574 qpair failed and we were unable to recover it. 00:29:29.574 [2024-07-15 16:21:05.121317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.574 [2024-07-15 16:21:05.121326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.574 qpair failed and we were unable to recover it. 00:29:29.574 [2024-07-15 16:21:05.121711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.574 [2024-07-15 16:21:05.121719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.574 qpair failed and we were unable to recover it. 00:29:29.574 [2024-07-15 16:21:05.122120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.574 [2024-07-15 16:21:05.122134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.574 qpair failed and we were unable to recover it. 00:29:29.574 [2024-07-15 16:21:05.122533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.574 [2024-07-15 16:21:05.122541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.574 qpair failed and we were unable to recover it. 00:29:29.574 [2024-07-15 16:21:05.122954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.574 [2024-07-15 16:21:05.122962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.574 qpair failed and we were unable to recover it. 00:29:29.574 [2024-07-15 16:21:05.123449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.574 [2024-07-15 16:21:05.123479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.574 qpair failed and we were unable to recover it. 00:29:29.574 [2024-07-15 16:21:05.123883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.574 [2024-07-15 16:21:05.123894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.574 qpair failed and we were unable to recover it. 00:29:29.574 [2024-07-15 16:21:05.124395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.574 [2024-07-15 16:21:05.124424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.574 qpair failed and we were unable to recover it. 
00:29:29.574 [2024-07-15 16:21:05.124842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.574 [2024-07-15 16:21:05.124852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.574 qpair failed and we were unable to recover it. 00:29:29.574 [2024-07-15 16:21:05.125351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.574 [2024-07-15 16:21:05.125380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.574 qpair failed and we were unable to recover it. 00:29:29.574 [2024-07-15 16:21:05.125777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.574 [2024-07-15 16:21:05.125787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.574 qpair failed and we were unable to recover it. 00:29:29.574 [2024-07-15 16:21:05.126173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.574 [2024-07-15 16:21:05.126182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.574 qpair failed and we were unable to recover it. 00:29:29.574 [2024-07-15 16:21:05.126401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.575 [2024-07-15 16:21:05.126411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.575 qpair failed and we were unable to recover it. 00:29:29.575 [2024-07-15 16:21:05.126738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.575 [2024-07-15 16:21:05.126746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.575 qpair failed and we were unable to recover it. 00:29:29.575 [2024-07-15 16:21:05.127145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.575 [2024-07-15 16:21:05.127153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.575 qpair failed and we were unable to recover it. 00:29:29.575 [2024-07-15 16:21:05.127568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.575 [2024-07-15 16:21:05.127577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.575 qpair failed and we were unable to recover it. 00:29:29.575 [2024-07-15 16:21:05.127993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.575 [2024-07-15 16:21:05.128005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.575 qpair failed and we were unable to recover it. 00:29:29.575 [2024-07-15 16:21:05.128291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.575 [2024-07-15 16:21:05.128300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.575 qpair failed and we were unable to recover it. 
00:29:29.580 [2024-07-15 16:21:05.204622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.580 [2024-07-15 16:21:05.204631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.580 qpair failed and we were unable to recover it. 00:29:29.580 [2024-07-15 16:21:05.205019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.580 [2024-07-15 16:21:05.205027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.580 qpair failed and we were unable to recover it. 00:29:29.580 [2024-07-15 16:21:05.205330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.580 [2024-07-15 16:21:05.205339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.580 qpair failed and we were unable to recover it. 00:29:29.580 [2024-07-15 16:21:05.205740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.580 [2024-07-15 16:21:05.205748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.580 qpair failed and we were unable to recover it. 00:29:29.580 [2024-07-15 16:21:05.206158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.580 [2024-07-15 16:21:05.206166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.580 qpair failed and we were unable to recover it. 00:29:29.580 [2024-07-15 16:21:05.206555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.580 [2024-07-15 16:21:05.206562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.580 qpair failed and we were unable to recover it. 00:29:29.580 [2024-07-15 16:21:05.206952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.580 [2024-07-15 16:21:05.206960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.580 qpair failed and we were unable to recover it. 00:29:29.580 [2024-07-15 16:21:05.207162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.580 [2024-07-15 16:21:05.207173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.580 qpair failed and we were unable to recover it. 00:29:29.580 [2024-07-15 16:21:05.207636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.580 [2024-07-15 16:21:05.207646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.580 qpair failed and we were unable to recover it. 00:29:29.580 [2024-07-15 16:21:05.208042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.580 [2024-07-15 16:21:05.208050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.580 qpair failed and we were unable to recover it. 
00:29:29.580 [2024-07-15 16:21:05.208440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.580 [2024-07-15 16:21:05.208448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.580 qpair failed and we were unable to recover it. 00:29:29.580 [2024-07-15 16:21:05.208838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.580 [2024-07-15 16:21:05.208845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.580 qpair failed and we were unable to recover it. 00:29:29.580 [2024-07-15 16:21:05.209253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.580 [2024-07-15 16:21:05.209261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.580 qpair failed and we were unable to recover it. 00:29:29.580 [2024-07-15 16:21:05.209699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.580 [2024-07-15 16:21:05.209707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.580 qpair failed and we were unable to recover it. 00:29:29.580 [2024-07-15 16:21:05.210096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.580 [2024-07-15 16:21:05.210104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.580 qpair failed and we were unable to recover it. 00:29:29.580 [2024-07-15 16:21:05.210306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.580 [2024-07-15 16:21:05.210315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.580 qpair failed and we were unable to recover it. 00:29:29.580 [2024-07-15 16:21:05.210618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.580 [2024-07-15 16:21:05.210626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.580 qpair failed and we were unable to recover it. 00:29:29.580 [2024-07-15 16:21:05.211046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.580 [2024-07-15 16:21:05.211056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.580 qpair failed and we were unable to recover it. 00:29:29.580 [2024-07-15 16:21:05.211421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.580 [2024-07-15 16:21:05.211430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.580 qpair failed and we were unable to recover it. 00:29:29.580 [2024-07-15 16:21:05.211822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.580 [2024-07-15 16:21:05.211830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.580 qpair failed and we were unable to recover it. 
00:29:29.580 [2024-07-15 16:21:05.212240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.580 [2024-07-15 16:21:05.212248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.580 qpair failed and we were unable to recover it. 00:29:29.580 [2024-07-15 16:21:05.212623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.580 [2024-07-15 16:21:05.212631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.580 qpair failed and we were unable to recover it. 00:29:29.580 [2024-07-15 16:21:05.213026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.580 [2024-07-15 16:21:05.213034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.580 qpair failed and we were unable to recover it. 00:29:29.580 [2024-07-15 16:21:05.213425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.580 [2024-07-15 16:21:05.213432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.580 qpair failed and we were unable to recover it. 00:29:29.580 [2024-07-15 16:21:05.213853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.580 [2024-07-15 16:21:05.213862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.580 qpair failed and we were unable to recover it. 00:29:29.580 [2024-07-15 16:21:05.214232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.580 [2024-07-15 16:21:05.214241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.580 qpair failed and we were unable to recover it. 00:29:29.580 [2024-07-15 16:21:05.214654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.580 [2024-07-15 16:21:05.214663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.580 qpair failed and we were unable to recover it. 00:29:29.580 [2024-07-15 16:21:05.215023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.580 [2024-07-15 16:21:05.215032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.580 qpair failed and we were unable to recover it. 00:29:29.580 [2024-07-15 16:21:05.215422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.580 [2024-07-15 16:21:05.215431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.581 qpair failed and we were unable to recover it. 00:29:29.581 [2024-07-15 16:21:05.215821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.581 [2024-07-15 16:21:05.215830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.581 qpair failed and we were unable to recover it. 
00:29:29.581 [2024-07-15 16:21:05.216129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.581 [2024-07-15 16:21:05.216138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.581 qpair failed and we were unable to recover it. 00:29:29.581 [2024-07-15 16:21:05.216432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.581 [2024-07-15 16:21:05.216441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.581 qpair failed and we were unable to recover it. 00:29:29.581 [2024-07-15 16:21:05.216852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.581 [2024-07-15 16:21:05.216861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.581 qpair failed and we were unable to recover it. 00:29:29.581 [2024-07-15 16:21:05.217249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.581 [2024-07-15 16:21:05.217257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.581 qpair failed and we were unable to recover it. 00:29:29.581 [2024-07-15 16:21:05.217711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.581 [2024-07-15 16:21:05.217720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.581 qpair failed and we were unable to recover it. 00:29:29.581 [2024-07-15 16:21:05.218116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.581 [2024-07-15 16:21:05.218132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.581 qpair failed and we were unable to recover it. 00:29:29.581 [2024-07-15 16:21:05.218523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.581 [2024-07-15 16:21:05.218533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.581 qpair failed and we were unable to recover it. 00:29:29.581 [2024-07-15 16:21:05.218942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.581 [2024-07-15 16:21:05.218952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.581 qpair failed and we were unable to recover it. 00:29:29.581 [2024-07-15 16:21:05.219433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.581 [2024-07-15 16:21:05.219463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.581 qpair failed and we were unable to recover it. 00:29:29.581 [2024-07-15 16:21:05.219871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.581 [2024-07-15 16:21:05.219882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.581 qpair failed and we were unable to recover it. 
00:29:29.581 [2024-07-15 16:21:05.220405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.581 [2024-07-15 16:21:05.220435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.581 qpair failed and we were unable to recover it. 00:29:29.581 [2024-07-15 16:21:05.220644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.581 [2024-07-15 16:21:05.220655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.581 qpair failed and we were unable to recover it. 00:29:29.581 [2024-07-15 16:21:05.220973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.581 [2024-07-15 16:21:05.220982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.581 qpair failed and we were unable to recover it. 00:29:29.581 [2024-07-15 16:21:05.221375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.581 [2024-07-15 16:21:05.221385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.581 qpair failed and we were unable to recover it. 00:29:29.581 [2024-07-15 16:21:05.221798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.581 [2024-07-15 16:21:05.221808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.581 qpair failed and we were unable to recover it. 00:29:29.581 [2024-07-15 16:21:05.222200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.581 [2024-07-15 16:21:05.222209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.581 qpair failed and we were unable to recover it. 00:29:29.581 [2024-07-15 16:21:05.222574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.581 [2024-07-15 16:21:05.222584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.581 qpair failed and we were unable to recover it. 00:29:29.581 [2024-07-15 16:21:05.222977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.581 [2024-07-15 16:21:05.222987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.581 qpair failed and we were unable to recover it. 00:29:29.581 [2024-07-15 16:21:05.223367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.581 [2024-07-15 16:21:05.223381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.581 qpair failed and we were unable to recover it. 00:29:29.581 [2024-07-15 16:21:05.223764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.581 [2024-07-15 16:21:05.223774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.581 qpair failed and we were unable to recover it. 
00:29:29.581 [2024-07-15 16:21:05.224169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.581 [2024-07-15 16:21:05.224179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.581 qpair failed and we were unable to recover it. 00:29:29.581 [2024-07-15 16:21:05.224589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.581 [2024-07-15 16:21:05.224599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.581 qpair failed and we were unable to recover it. 00:29:29.581 [2024-07-15 16:21:05.225011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.581 [2024-07-15 16:21:05.225020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.581 qpair failed and we were unable to recover it. 00:29:29.581 [2024-07-15 16:21:05.225351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.581 [2024-07-15 16:21:05.225361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.581 qpair failed and we were unable to recover it. 00:29:29.581 [2024-07-15 16:21:05.225772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.581 [2024-07-15 16:21:05.225781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.581 qpair failed and we were unable to recover it. 00:29:29.581 [2024-07-15 16:21:05.226167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.581 [2024-07-15 16:21:05.226177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.581 qpair failed and we were unable to recover it. 00:29:29.581 [2024-07-15 16:21:05.226554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.581 [2024-07-15 16:21:05.226563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.581 qpair failed and we were unable to recover it. 00:29:29.581 [2024-07-15 16:21:05.226961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.581 [2024-07-15 16:21:05.226971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.581 qpair failed and we were unable to recover it. 00:29:29.581 [2024-07-15 16:21:05.227369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.581 [2024-07-15 16:21:05.227378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.581 qpair failed and we were unable to recover it. 00:29:29.581 [2024-07-15 16:21:05.227769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.581 [2024-07-15 16:21:05.227778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.581 qpair failed and we were unable to recover it. 
00:29:29.581 [2024-07-15 16:21:05.228194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.581 [2024-07-15 16:21:05.228205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.581 qpair failed and we were unable to recover it. 00:29:29.581 [2024-07-15 16:21:05.228598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.581 [2024-07-15 16:21:05.228607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.581 qpair failed and we were unable to recover it. 00:29:29.581 [2024-07-15 16:21:05.228994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.581 [2024-07-15 16:21:05.229003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.581 qpair failed and we were unable to recover it. 00:29:29.581 [2024-07-15 16:21:05.229426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.581 [2024-07-15 16:21:05.229435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.581 qpair failed and we were unable to recover it. 00:29:29.581 [2024-07-15 16:21:05.229848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.581 [2024-07-15 16:21:05.229857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.581 qpair failed and we were unable to recover it. 00:29:29.581 [2024-07-15 16:21:05.230252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.581 [2024-07-15 16:21:05.230263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.581 qpair failed and we were unable to recover it. 00:29:29.581 [2024-07-15 16:21:05.230519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.581 [2024-07-15 16:21:05.230529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.581 qpair failed and we were unable to recover it. 00:29:29.581 [2024-07-15 16:21:05.230923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.581 [2024-07-15 16:21:05.230932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.581 qpair failed and we were unable to recover it. 00:29:29.581 [2024-07-15 16:21:05.231347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.581 [2024-07-15 16:21:05.231357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.581 qpair failed and we were unable to recover it. 00:29:29.581 [2024-07-15 16:21:05.231760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.582 [2024-07-15 16:21:05.231770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.582 qpair failed and we were unable to recover it. 
00:29:29.582 [2024-07-15 16:21:05.232155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.582 [2024-07-15 16:21:05.232164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.582 qpair failed and we were unable to recover it. 00:29:29.582 [2024-07-15 16:21:05.232479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.582 [2024-07-15 16:21:05.232489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.582 qpair failed and we were unable to recover it. 00:29:29.582 [2024-07-15 16:21:05.232904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.582 [2024-07-15 16:21:05.232912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.582 qpair failed and we were unable to recover it. 00:29:29.582 [2024-07-15 16:21:05.233329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.582 [2024-07-15 16:21:05.233339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.582 qpair failed and we were unable to recover it. 00:29:29.582 [2024-07-15 16:21:05.233797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.582 [2024-07-15 16:21:05.233807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.582 qpair failed and we were unable to recover it. 00:29:29.582 [2024-07-15 16:21:05.234195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.582 [2024-07-15 16:21:05.234203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.582 qpair failed and we were unable to recover it. 00:29:29.582 [2024-07-15 16:21:05.234493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.582 [2024-07-15 16:21:05.234501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.582 qpair failed and we were unable to recover it. 00:29:29.582 [2024-07-15 16:21:05.234928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.582 [2024-07-15 16:21:05.234938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.582 qpair failed and we were unable to recover it. 00:29:29.582 [2024-07-15 16:21:05.235353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.582 [2024-07-15 16:21:05.235362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.582 qpair failed and we were unable to recover it. 00:29:29.582 [2024-07-15 16:21:05.235754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.582 [2024-07-15 16:21:05.235762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.582 qpair failed and we were unable to recover it. 
00:29:29.582 [2024-07-15 16:21:05.236145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.582 [2024-07-15 16:21:05.236154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.582 qpair failed and we were unable to recover it. 00:29:29.582 [2024-07-15 16:21:05.236560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.582 [2024-07-15 16:21:05.236568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.582 qpair failed and we were unable to recover it. 00:29:29.582 [2024-07-15 16:21:05.236866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.582 [2024-07-15 16:21:05.236875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.582 qpair failed and we were unable to recover it. 00:29:29.582 [2024-07-15 16:21:05.237112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.582 [2024-07-15 16:21:05.237120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.582 qpair failed and we were unable to recover it. 00:29:29.582 [2024-07-15 16:21:05.237501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.582 [2024-07-15 16:21:05.237510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.582 qpair failed and we were unable to recover it. 00:29:29.582 [2024-07-15 16:21:05.237899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.582 [2024-07-15 16:21:05.237907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.582 qpair failed and we were unable to recover it. 00:29:29.582 [2024-07-15 16:21:05.238298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.582 [2024-07-15 16:21:05.238306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.582 qpair failed and we were unable to recover it. 00:29:29.582 [2024-07-15 16:21:05.238698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.582 [2024-07-15 16:21:05.238706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.582 qpair failed and we were unable to recover it. 00:29:29.582 [2024-07-15 16:21:05.239026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.582 [2024-07-15 16:21:05.239035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.582 qpair failed and we were unable to recover it. 00:29:29.582 [2024-07-15 16:21:05.239378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.582 [2024-07-15 16:21:05.239387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.582 qpair failed and we were unable to recover it. 
00:29:29.582 [2024-07-15 16:21:05.239785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.582 [2024-07-15 16:21:05.239793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.582 qpair failed and we were unable to recover it. 00:29:29.582 [2024-07-15 16:21:05.240211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.582 [2024-07-15 16:21:05.240221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.582 qpair failed and we were unable to recover it. 00:29:29.582 [2024-07-15 16:21:05.240405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.582 [2024-07-15 16:21:05.240415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.582 qpair failed and we were unable to recover it. 00:29:29.582 [2024-07-15 16:21:05.240792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.582 [2024-07-15 16:21:05.240801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.582 qpair failed and we were unable to recover it. 00:29:29.582 [2024-07-15 16:21:05.241117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.582 [2024-07-15 16:21:05.241131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.582 qpair failed and we were unable to recover it. 00:29:29.582 [2024-07-15 16:21:05.241529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.582 [2024-07-15 16:21:05.241537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.582 qpair failed and we were unable to recover it. 00:29:29.582 [2024-07-15 16:21:05.241921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.582 [2024-07-15 16:21:05.241930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.582 qpair failed and we were unable to recover it. 00:29:29.582 [2024-07-15 16:21:05.242322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.582 [2024-07-15 16:21:05.242331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.582 qpair failed and we were unable to recover it. 00:29:29.582 [2024-07-15 16:21:05.242791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.582 [2024-07-15 16:21:05.242798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.582 qpair failed and we were unable to recover it. 00:29:29.582 [2024-07-15 16:21:05.243192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.582 [2024-07-15 16:21:05.243201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.582 qpair failed and we were unable to recover it. 
00:29:29.582 [2024-07-15 16:21:05.243493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.582 [2024-07-15 16:21:05.243500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.582 qpair failed and we were unable to recover it. 00:29:29.582 [2024-07-15 16:21:05.243923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.582 [2024-07-15 16:21:05.243931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.582 qpair failed and we were unable to recover it. 00:29:29.582 [2024-07-15 16:21:05.244343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.582 [2024-07-15 16:21:05.244352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.582 qpair failed and we were unable to recover it. 00:29:29.582 [2024-07-15 16:21:05.244737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.582 [2024-07-15 16:21:05.244745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.582 qpair failed and we were unable to recover it. 00:29:29.582 [2024-07-15 16:21:05.245166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.582 [2024-07-15 16:21:05.245175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.582 qpair failed and we were unable to recover it. 00:29:29.582 [2024-07-15 16:21:05.245550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.582 [2024-07-15 16:21:05.245558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.582 qpair failed and we were unable to recover it. 00:29:29.582 [2024-07-15 16:21:05.245949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.582 [2024-07-15 16:21:05.245956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.582 qpair failed and we were unable to recover it. 00:29:29.582 [2024-07-15 16:21:05.246350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.582 [2024-07-15 16:21:05.246359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.582 qpair failed and we were unable to recover it. 00:29:29.582 [2024-07-15 16:21:05.246652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.582 [2024-07-15 16:21:05.246660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.582 qpair failed and we were unable to recover it. 00:29:29.582 [2024-07-15 16:21:05.247049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.582 [2024-07-15 16:21:05.247058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.582 qpair failed and we were unable to recover it. 
00:29:29.583 [2024-07-15 16:21:05.247434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.583 [2024-07-15 16:21:05.247442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.583 qpair failed and we were unable to recover it. 00:29:29.583 [2024-07-15 16:21:05.247832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.583 [2024-07-15 16:21:05.247841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.583 qpair failed and we were unable to recover it. 00:29:29.583 [2024-07-15 16:21:05.248256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.583 [2024-07-15 16:21:05.248264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.583 qpair failed and we were unable to recover it. 00:29:29.583 [2024-07-15 16:21:05.248643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.583 [2024-07-15 16:21:05.248651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.583 qpair failed and we were unable to recover it. 00:29:29.583 [2024-07-15 16:21:05.249041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.583 [2024-07-15 16:21:05.249049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.583 qpair failed and we were unable to recover it. 00:29:29.583 [2024-07-15 16:21:05.249499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.583 [2024-07-15 16:21:05.249507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.583 qpair failed and we were unable to recover it. 00:29:29.583 [2024-07-15 16:21:05.249816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.583 [2024-07-15 16:21:05.249825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.583 qpair failed and we were unable to recover it. 00:29:29.583 [2024-07-15 16:21:05.250221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.583 [2024-07-15 16:21:05.250229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.583 qpair failed and we were unable to recover it. 00:29:29.583 [2024-07-15 16:21:05.250626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.583 [2024-07-15 16:21:05.250634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.583 qpair failed and we were unable to recover it. 00:29:29.583 [2024-07-15 16:21:05.250902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.583 [2024-07-15 16:21:05.250909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.583 qpair failed and we were unable to recover it. 
00:29:29.583 [2024-07-15 16:21:05.251318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.583 [2024-07-15 16:21:05.251327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.583 qpair failed and we were unable to recover it. 00:29:29.583 [2024-07-15 16:21:05.251717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.583 [2024-07-15 16:21:05.251726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.583 qpair failed and we were unable to recover it. 00:29:29.583 [2024-07-15 16:21:05.252107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.583 [2024-07-15 16:21:05.252116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.583 qpair failed and we were unable to recover it. 00:29:29.583 [2024-07-15 16:21:05.252504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.583 [2024-07-15 16:21:05.252512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.583 qpair failed and we were unable to recover it. 00:29:29.583 [2024-07-15 16:21:05.252886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.583 [2024-07-15 16:21:05.252894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.583 qpair failed and we were unable to recover it. 00:29:29.583 [2024-07-15 16:21:05.253386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.583 [2024-07-15 16:21:05.253415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.583 qpair failed and we were unable to recover it. 00:29:29.583 [2024-07-15 16:21:05.253828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.583 [2024-07-15 16:21:05.253837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.583 qpair failed and we were unable to recover it. 00:29:29.583 [2024-07-15 16:21:05.254258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.583 [2024-07-15 16:21:05.254266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.583 qpair failed and we were unable to recover it. 00:29:29.583 [2024-07-15 16:21:05.254646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.583 [2024-07-15 16:21:05.254659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.583 qpair failed and we were unable to recover it. 00:29:29.583 [2024-07-15 16:21:05.255057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.583 [2024-07-15 16:21:05.255066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.583 qpair failed and we were unable to recover it. 
00:29:29.583 [2024-07-15 16:21:05.255486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.583 [2024-07-15 16:21:05.255494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.583 qpair failed and we were unable to recover it. 00:29:29.583 [2024-07-15 16:21:05.255894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.583 [2024-07-15 16:21:05.255902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.583 qpair failed and we were unable to recover it. 00:29:29.583 [2024-07-15 16:21:05.256424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.583 [2024-07-15 16:21:05.256453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.583 qpair failed and we were unable to recover it. 00:29:29.583 [2024-07-15 16:21:05.256867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.583 [2024-07-15 16:21:05.256878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.583 qpair failed and we were unable to recover it. 00:29:29.583 [2024-07-15 16:21:05.257388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.583 [2024-07-15 16:21:05.257417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.583 qpair failed and we were unable to recover it. 00:29:29.583 [2024-07-15 16:21:05.257822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.583 [2024-07-15 16:21:05.257831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.583 qpair failed and we were unable to recover it. 00:29:29.583 [2024-07-15 16:21:05.258260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.583 [2024-07-15 16:21:05.258269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.583 qpair failed and we were unable to recover it. 00:29:29.583 [2024-07-15 16:21:05.258453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.583 [2024-07-15 16:21:05.258462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.583 qpair failed and we were unable to recover it. 00:29:29.583 [2024-07-15 16:21:05.258829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.583 [2024-07-15 16:21:05.258837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.583 qpair failed and we were unable to recover it. 00:29:29.583 [2024-07-15 16:21:05.259234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.583 [2024-07-15 16:21:05.259243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.583 qpair failed and we were unable to recover it. 
00:29:29.583 [2024-07-15 16:21:05.259679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.583 [2024-07-15 16:21:05.259687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.583 qpair failed and we were unable to recover it. 00:29:29.583 [2024-07-15 16:21:05.260078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.583 [2024-07-15 16:21:05.260086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.583 qpair failed and we were unable to recover it. 00:29:29.583 [2024-07-15 16:21:05.260482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.583 [2024-07-15 16:21:05.260491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.583 qpair failed and we were unable to recover it. 00:29:29.583 [2024-07-15 16:21:05.260877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.583 [2024-07-15 16:21:05.260886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.583 qpair failed and we were unable to recover it. 00:29:29.583 [2024-07-15 16:21:05.261280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.583 [2024-07-15 16:21:05.261289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.583 qpair failed and we were unable to recover it. 00:29:29.583 [2024-07-15 16:21:05.261583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.583 [2024-07-15 16:21:05.261591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.583 qpair failed and we were unable to recover it. 00:29:29.583 [2024-07-15 16:21:05.261999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.583 [2024-07-15 16:21:05.262006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.583 qpair failed and we were unable to recover it. 00:29:29.583 [2024-07-15 16:21:05.262403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.583 [2024-07-15 16:21:05.262411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.583 qpair failed and we were unable to recover it. 00:29:29.583 [2024-07-15 16:21:05.262828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.583 [2024-07-15 16:21:05.262836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.583 qpair failed and we were unable to recover it. 00:29:29.583 [2024-07-15 16:21:05.263228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.583 [2024-07-15 16:21:05.263237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.583 qpair failed and we were unable to recover it. 
00:29:29.583 [2024-07-15 16:21:05.263646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.584 [2024-07-15 16:21:05.263654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.584 qpair failed and we were unable to recover it. 00:29:29.584 [2024-07-15 16:21:05.264038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.584 [2024-07-15 16:21:05.264047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.584 qpair failed and we were unable to recover it. 00:29:29.584 [2024-07-15 16:21:05.264423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.584 [2024-07-15 16:21:05.264431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.584 qpair failed and we were unable to recover it. 00:29:29.584 [2024-07-15 16:21:05.264821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.584 [2024-07-15 16:21:05.264829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.584 qpair failed and we were unable to recover it. 00:29:29.584 [2024-07-15 16:21:05.265205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.584 [2024-07-15 16:21:05.265213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.584 qpair failed and we were unable to recover it. 00:29:29.584 [2024-07-15 16:21:05.265617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.584 [2024-07-15 16:21:05.265626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.584 qpair failed and we were unable to recover it. 00:29:29.584 [2024-07-15 16:21:05.266008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.584 [2024-07-15 16:21:05.266016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.584 qpair failed and we were unable to recover it. 00:29:29.584 [2024-07-15 16:21:05.266423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.584 [2024-07-15 16:21:05.266431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.584 qpair failed and we were unable to recover it. 00:29:29.584 [2024-07-15 16:21:05.266822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.584 [2024-07-15 16:21:05.266830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.584 qpair failed and we were unable to recover it. 00:29:29.584 [2024-07-15 16:21:05.267213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.584 [2024-07-15 16:21:05.267223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.584 qpair failed and we were unable to recover it. 
00:29:29.584 [2024-07-15 16:21:05.267609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.584 [2024-07-15 16:21:05.267617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.584 qpair failed and we were unable to recover it. 00:29:29.584 [2024-07-15 16:21:05.268007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.584 [2024-07-15 16:21:05.268016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.584 qpair failed and we were unable to recover it. 00:29:29.584 [2024-07-15 16:21:05.268505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.584 [2024-07-15 16:21:05.268513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.584 qpair failed and we were unable to recover it. 00:29:29.584 [2024-07-15 16:21:05.268796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.584 [2024-07-15 16:21:05.268804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.584 qpair failed and we were unable to recover it. 00:29:29.584 [2024-07-15 16:21:05.269216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.584 [2024-07-15 16:21:05.269224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.584 qpair failed and we were unable to recover it. 00:29:29.584 [2024-07-15 16:21:05.269627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.584 [2024-07-15 16:21:05.269635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.584 qpair failed and we were unable to recover it. 00:29:29.584 [2024-07-15 16:21:05.270043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.584 [2024-07-15 16:21:05.270051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.584 qpair failed and we were unable to recover it. 00:29:29.584 [2024-07-15 16:21:05.270451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.584 [2024-07-15 16:21:05.270460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.584 qpair failed and we were unable to recover it. 00:29:29.584 [2024-07-15 16:21:05.270786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.584 [2024-07-15 16:21:05.270795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.584 qpair failed and we were unable to recover it. 00:29:29.584 [2024-07-15 16:21:05.271187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.584 [2024-07-15 16:21:05.271196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.584 qpair failed and we were unable to recover it. 
00:29:29.584 [2024-07-15 16:21:05.271582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.584 [2024-07-15 16:21:05.271589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.584 qpair failed and we were unable to recover it. 00:29:29.584 [2024-07-15 16:21:05.271979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.584 [2024-07-15 16:21:05.271988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.584 qpair failed and we were unable to recover it. 00:29:29.584 [2024-07-15 16:21:05.272397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.584 [2024-07-15 16:21:05.272405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.584 qpair failed and we were unable to recover it. 00:29:29.584 [2024-07-15 16:21:05.272788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.584 [2024-07-15 16:21:05.272796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.584 qpair failed and we were unable to recover it. 00:29:29.584 [2024-07-15 16:21:05.273189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.584 [2024-07-15 16:21:05.273197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.584 qpair failed and we were unable to recover it. 00:29:29.584 [2024-07-15 16:21:05.273371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.584 [2024-07-15 16:21:05.273382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.584 qpair failed and we were unable to recover it. 00:29:29.584 [2024-07-15 16:21:05.273766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.584 [2024-07-15 16:21:05.273775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.584 qpair failed and we were unable to recover it. 00:29:29.584 [2024-07-15 16:21:05.274233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.584 [2024-07-15 16:21:05.274242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.584 qpair failed and we were unable to recover it. 00:29:29.584 [2024-07-15 16:21:05.274650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.584 [2024-07-15 16:21:05.274659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.584 qpair failed and we were unable to recover it. 00:29:29.584 [2024-07-15 16:21:05.275046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.584 [2024-07-15 16:21:05.275054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.584 qpair failed and we were unable to recover it. 
00:29:29.584 [2024-07-15 16:21:05.275446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.584 [2024-07-15 16:21:05.275456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.584 qpair failed and we were unable to recover it. 00:29:29.584 [2024-07-15 16:21:05.275848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.584 [2024-07-15 16:21:05.275857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.584 qpair failed and we were unable to recover it. 00:29:29.584 [2024-07-15 16:21:05.276260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.584 [2024-07-15 16:21:05.276269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.584 qpair failed and we were unable to recover it. 00:29:29.584 [2024-07-15 16:21:05.276653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.584 [2024-07-15 16:21:05.276661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.584 qpair failed and we were unable to recover it. 00:29:29.584 [2024-07-15 16:21:05.277074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.585 [2024-07-15 16:21:05.277081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.585 qpair failed and we were unable to recover it. 00:29:29.585 [2024-07-15 16:21:05.277462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.585 [2024-07-15 16:21:05.277470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.585 qpair failed and we were unable to recover it. 00:29:29.585 [2024-07-15 16:21:05.277679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.585 [2024-07-15 16:21:05.277687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.585 qpair failed and we were unable to recover it. 00:29:29.585 [2024-07-15 16:21:05.278090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.585 [2024-07-15 16:21:05.278098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.585 qpair failed and we were unable to recover it. 00:29:29.585 [2024-07-15 16:21:05.278460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.585 [2024-07-15 16:21:05.278469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.585 qpair failed and we were unable to recover it. 00:29:29.585 [2024-07-15 16:21:05.278855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.585 [2024-07-15 16:21:05.278863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.585 qpair failed and we were unable to recover it. 
00:29:29.585 [2024-07-15 16:21:05.279281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.585 [2024-07-15 16:21:05.279289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.585 qpair failed and we were unable to recover it. 00:29:29.585 [2024-07-15 16:21:05.279685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.585 [2024-07-15 16:21:05.279693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.585 qpair failed and we were unable to recover it. 00:29:29.585 [2024-07-15 16:21:05.279882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.585 [2024-07-15 16:21:05.279891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.585 qpair failed and we were unable to recover it. 00:29:29.585 [2024-07-15 16:21:05.280258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.585 [2024-07-15 16:21:05.280267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.585 qpair failed and we were unable to recover it. 00:29:29.585 [2024-07-15 16:21:05.280667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.585 [2024-07-15 16:21:05.280676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.585 qpair failed and we were unable to recover it. 00:29:29.585 [2024-07-15 16:21:05.281067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.585 [2024-07-15 16:21:05.281075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.585 qpair failed and we were unable to recover it. 00:29:29.585 [2024-07-15 16:21:05.281492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.585 [2024-07-15 16:21:05.281500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.585 qpair failed and we were unable to recover it. 00:29:29.585 [2024-07-15 16:21:05.281938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.585 [2024-07-15 16:21:05.281946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.585 qpair failed and we were unable to recover it. 00:29:29.585 [2024-07-15 16:21:05.282336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.585 [2024-07-15 16:21:05.282344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.585 qpair failed and we were unable to recover it. 00:29:29.585 [2024-07-15 16:21:05.282734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.585 [2024-07-15 16:21:05.282743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.585 qpair failed and we were unable to recover it. 
00:29:29.585 [2024-07-15 16:21:05.283154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.585 [2024-07-15 16:21:05.283163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.585 qpair failed and we were unable to recover it. 00:29:29.585 [2024-07-15 16:21:05.283559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.585 [2024-07-15 16:21:05.283567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.585 qpair failed and we were unable to recover it. 00:29:29.585 [2024-07-15 16:21:05.283959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.585 [2024-07-15 16:21:05.283967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.585 qpair failed and we were unable to recover it. 00:29:29.585 [2024-07-15 16:21:05.284431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.585 [2024-07-15 16:21:05.284440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.585 qpair failed and we were unable to recover it. 00:29:29.585 [2024-07-15 16:21:05.284810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.585 [2024-07-15 16:21:05.284819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.585 qpair failed and we were unable to recover it. 00:29:29.585 [2024-07-15 16:21:05.285208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.585 [2024-07-15 16:21:05.285216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.585 qpair failed and we were unable to recover it. 00:29:29.585 [2024-07-15 16:21:05.285613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.585 [2024-07-15 16:21:05.285621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.585 qpair failed and we were unable to recover it. 00:29:29.585 [2024-07-15 16:21:05.286013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.585 [2024-07-15 16:21:05.286021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.585 qpair failed and we were unable to recover it. 00:29:29.585 [2024-07-15 16:21:05.286421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.585 [2024-07-15 16:21:05.286431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.585 qpair failed and we were unable to recover it. 00:29:29.585 [2024-07-15 16:21:05.286822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.585 [2024-07-15 16:21:05.286830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.585 qpair failed and we were unable to recover it. 
00:29:29.585 [2024-07-15 16:21:05.287221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.585 [2024-07-15 16:21:05.287229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.585 qpair failed and we were unable to recover it. 00:29:29.585 [2024-07-15 16:21:05.287639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.585 [2024-07-15 16:21:05.287647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.585 qpair failed and we were unable to recover it. 00:29:29.585 [2024-07-15 16:21:05.288059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.585 [2024-07-15 16:21:05.288068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.585 qpair failed and we were unable to recover it. 00:29:29.585 [2024-07-15 16:21:05.288473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.585 [2024-07-15 16:21:05.288482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.585 qpair failed and we were unable to recover it. 00:29:29.585 [2024-07-15 16:21:05.288878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.585 [2024-07-15 16:21:05.288886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.585 qpair failed and we were unable to recover it. 00:29:29.585 [2024-07-15 16:21:05.289275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.585 [2024-07-15 16:21:05.289283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.585 qpair failed and we were unable to recover it. 00:29:29.585 [2024-07-15 16:21:05.289700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.585 [2024-07-15 16:21:05.289708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.585 qpair failed and we were unable to recover it. 00:29:29.585 [2024-07-15 16:21:05.290105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.585 [2024-07-15 16:21:05.290113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.585 qpair failed and we were unable to recover it. 00:29:29.585 [2024-07-15 16:21:05.290501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.585 [2024-07-15 16:21:05.290509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.585 qpair failed and we were unable to recover it. 00:29:29.585 [2024-07-15 16:21:05.290899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.585 [2024-07-15 16:21:05.290907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.585 qpair failed and we were unable to recover it. 
00:29:29.585 [2024-07-15 16:21:05.291406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.585 [2024-07-15 16:21:05.291435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.585 qpair failed and we were unable to recover it. 00:29:29.585 [2024-07-15 16:21:05.291723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.585 [2024-07-15 16:21:05.291733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.585 qpair failed and we were unable to recover it. 00:29:29.585 [2024-07-15 16:21:05.292067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.585 [2024-07-15 16:21:05.292076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.585 qpair failed and we were unable to recover it. 00:29:29.585 [2024-07-15 16:21:05.292483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.585 [2024-07-15 16:21:05.292492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.585 qpair failed and we were unable to recover it. 00:29:29.585 [2024-07-15 16:21:05.292893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.586 [2024-07-15 16:21:05.292902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.586 qpair failed and we were unable to recover it. 00:29:29.586 [2024-07-15 16:21:05.293414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.586 [2024-07-15 16:21:05.293442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.586 qpair failed and we were unable to recover it. 00:29:29.586 [2024-07-15 16:21:05.293851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.586 [2024-07-15 16:21:05.293860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.586 qpair failed and we were unable to recover it. 00:29:29.586 [2024-07-15 16:21:05.294255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.586 [2024-07-15 16:21:05.294264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.586 qpair failed and we were unable to recover it. 00:29:29.586 [2024-07-15 16:21:05.294683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.586 [2024-07-15 16:21:05.294691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.586 qpair failed and we were unable to recover it. 00:29:29.586 [2024-07-15 16:21:05.295010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.586 [2024-07-15 16:21:05.295019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.586 qpair failed and we were unable to recover it. 
00:29:29.586 [2024-07-15 16:21:05.295416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.586 [2024-07-15 16:21:05.295424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.586 qpair failed and we were unable to recover it. 00:29:29.586 [2024-07-15 16:21:05.295809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.586 [2024-07-15 16:21:05.295817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.586 qpair failed and we were unable to recover it. 00:29:29.586 [2024-07-15 16:21:05.296238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.586 [2024-07-15 16:21:05.296246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.586 qpair failed and we were unable to recover it. 00:29:29.586 [2024-07-15 16:21:05.296637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.586 [2024-07-15 16:21:05.296645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.586 qpair failed and we were unable to recover it. 00:29:29.586 [2024-07-15 16:21:05.296948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.586 [2024-07-15 16:21:05.296957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.586 qpair failed and we were unable to recover it. 00:29:29.586 [2024-07-15 16:21:05.297433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.586 [2024-07-15 16:21:05.297442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.586 qpair failed and we were unable to recover it. 00:29:29.586 [2024-07-15 16:21:05.297772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.586 [2024-07-15 16:21:05.297780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.586 qpair failed and we were unable to recover it. 00:29:29.586 [2024-07-15 16:21:05.298179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.586 [2024-07-15 16:21:05.298187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.586 qpair failed and we were unable to recover it. 00:29:29.586 [2024-07-15 16:21:05.298586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.586 [2024-07-15 16:21:05.298594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.586 qpair failed and we were unable to recover it. 00:29:29.586 [2024-07-15 16:21:05.298993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.586 [2024-07-15 16:21:05.299001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.586 qpair failed and we were unable to recover it. 
00:29:29.586 [2024-07-15 16:21:05.299379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.586 [2024-07-15 16:21:05.299388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.586 qpair failed and we were unable to recover it. 00:29:29.586 [2024-07-15 16:21:05.299774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.586 [2024-07-15 16:21:05.299782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.586 qpair failed and we were unable to recover it. 00:29:29.586 [2024-07-15 16:21:05.299989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.586 [2024-07-15 16:21:05.299996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.586 qpair failed and we were unable to recover it. 00:29:29.586 [2024-07-15 16:21:05.300377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.586 [2024-07-15 16:21:05.300385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.586 qpair failed and we were unable to recover it. 00:29:29.586 [2024-07-15 16:21:05.300769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.586 [2024-07-15 16:21:05.300778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.586 qpair failed and we were unable to recover it. 00:29:29.586 [2024-07-15 16:21:05.301344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.586 [2024-07-15 16:21:05.301373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.586 qpair failed and we were unable to recover it. 00:29:29.586 [2024-07-15 16:21:05.301578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.586 [2024-07-15 16:21:05.301587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.586 qpair failed and we were unable to recover it. 00:29:29.586 [2024-07-15 16:21:05.301997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.586 [2024-07-15 16:21:05.302005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.586 qpair failed and we were unable to recover it. 00:29:29.586 [2024-07-15 16:21:05.302419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.586 [2024-07-15 16:21:05.302432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.586 qpair failed and we were unable to recover it. 00:29:29.586 [2024-07-15 16:21:05.302851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.586 [2024-07-15 16:21:05.302859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.586 qpair failed and we were unable to recover it. 
00:29:29.586 [2024-07-15 16:21:05.303360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.586 [2024-07-15 16:21:05.303389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.586 qpair failed and we were unable to recover it. 00:29:29.586 [2024-07-15 16:21:05.303796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.586 [2024-07-15 16:21:05.303805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.586 qpair failed and we were unable to recover it. 00:29:29.586 [2024-07-15 16:21:05.304204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.586 [2024-07-15 16:21:05.304213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.586 qpair failed and we were unable to recover it. 00:29:29.586 [2024-07-15 16:21:05.304600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.586 [2024-07-15 16:21:05.304608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.586 qpair failed and we were unable to recover it. 00:29:29.586 [2024-07-15 16:21:05.304997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.586 [2024-07-15 16:21:05.305006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.586 qpair failed and we were unable to recover it. 00:29:29.586 [2024-07-15 16:21:05.305388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.586 [2024-07-15 16:21:05.305399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.586 qpair failed and we were unable to recover it. 00:29:29.586 [2024-07-15 16:21:05.305789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.586 [2024-07-15 16:21:05.305797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.586 qpair failed and we were unable to recover it. 00:29:29.586 [2024-07-15 16:21:05.306309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.586 [2024-07-15 16:21:05.306338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.586 qpair failed and we were unable to recover it. 00:29:29.586 [2024-07-15 16:21:05.306810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.586 [2024-07-15 16:21:05.306819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.586 qpair failed and we were unable to recover it. 00:29:29.586 [2024-07-15 16:21:05.307206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.586 [2024-07-15 16:21:05.307215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.586 qpair failed and we were unable to recover it. 
00:29:29.586 [2024-07-15 16:21:05.307609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.586 [2024-07-15 16:21:05.307617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.586 qpair failed and we were unable to recover it. 00:29:29.586 [2024-07-15 16:21:05.308035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.586 [2024-07-15 16:21:05.308043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.586 qpair failed and we were unable to recover it. 00:29:29.586 [2024-07-15 16:21:05.308404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.586 [2024-07-15 16:21:05.308412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.586 qpair failed and we were unable to recover it. 00:29:29.586 [2024-07-15 16:21:05.308850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.586 [2024-07-15 16:21:05.308858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.586 qpair failed and we were unable to recover it. 00:29:29.586 [2024-07-15 16:21:05.309252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.586 [2024-07-15 16:21:05.309260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.586 qpair failed and we were unable to recover it. 00:29:29.587 [2024-07-15 16:21:05.309738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.587 [2024-07-15 16:21:05.309747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.587 qpair failed and we were unable to recover it. 00:29:29.587 [2024-07-15 16:21:05.309945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.587 [2024-07-15 16:21:05.309956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.587 qpair failed and we were unable to recover it. 00:29:29.587 [2024-07-15 16:21:05.310325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.587 [2024-07-15 16:21:05.310334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.587 qpair failed and we were unable to recover it. 00:29:29.587 [2024-07-15 16:21:05.310725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.587 [2024-07-15 16:21:05.310733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.587 qpair failed and we were unable to recover it. 00:29:29.587 [2024-07-15 16:21:05.311141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.587 [2024-07-15 16:21:05.311149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.587 qpair failed and we were unable to recover it. 
00:29:29.587 [2024-07-15 16:21:05.311537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.587 [2024-07-15 16:21:05.311545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.587 qpair failed and we were unable to recover it. 00:29:29.587 [2024-07-15 16:21:05.311935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.587 [2024-07-15 16:21:05.311943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.587 qpair failed and we were unable to recover it. 00:29:29.587 [2024-07-15 16:21:05.312335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.587 [2024-07-15 16:21:05.312345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.587 qpair failed and we were unable to recover it. 00:29:29.587 [2024-07-15 16:21:05.312763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.587 [2024-07-15 16:21:05.312771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.587 qpair failed and we were unable to recover it. 00:29:29.587 [2024-07-15 16:21:05.313162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.587 [2024-07-15 16:21:05.313170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.587 qpair failed and we were unable to recover it. 00:29:29.587 [2024-07-15 16:21:05.313562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.587 [2024-07-15 16:21:05.313571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.587 qpair failed and we were unable to recover it. 00:29:29.587 [2024-07-15 16:21:05.313959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.587 [2024-07-15 16:21:05.313968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.587 qpair failed and we were unable to recover it. 00:29:29.587 [2024-07-15 16:21:05.314287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.587 [2024-07-15 16:21:05.314297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.587 qpair failed and we were unable to recover it. 00:29:29.587 [2024-07-15 16:21:05.314686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.587 [2024-07-15 16:21:05.314694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.587 qpair failed and we were unable to recover it. 00:29:29.587 [2024-07-15 16:21:05.315085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.587 [2024-07-15 16:21:05.315093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.587 qpair failed and we were unable to recover it. 
00:29:29.587 [2024-07-15 16:21:05.315476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.587 [2024-07-15 16:21:05.315485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.587 qpair failed and we were unable to recover it. 00:29:29.587 [2024-07-15 16:21:05.315871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.587 [2024-07-15 16:21:05.315879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.587 qpair failed and we were unable to recover it. 00:29:29.587 [2024-07-15 16:21:05.316273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.587 [2024-07-15 16:21:05.316281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.587 qpair failed and we were unable to recover it. 00:29:29.587 [2024-07-15 16:21:05.316490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.587 [2024-07-15 16:21:05.316500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.587 qpair failed and we were unable to recover it. 00:29:29.587 [2024-07-15 16:21:05.316892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.587 [2024-07-15 16:21:05.316900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.587 qpair failed and we were unable to recover it. 00:29:29.587 [2024-07-15 16:21:05.317314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.587 [2024-07-15 16:21:05.317322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.587 qpair failed and we were unable to recover it. 00:29:29.587 [2024-07-15 16:21:05.317711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.587 [2024-07-15 16:21:05.317719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.587 qpair failed and we were unable to recover it. 00:29:29.587 [2024-07-15 16:21:05.318110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.587 [2024-07-15 16:21:05.318119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.587 qpair failed and we were unable to recover it. 00:29:29.587 [2024-07-15 16:21:05.318528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.587 [2024-07-15 16:21:05.318540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.587 qpair failed and we were unable to recover it. 00:29:29.587 [2024-07-15 16:21:05.318953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.587 [2024-07-15 16:21:05.318962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.587 qpair failed and we were unable to recover it. 
00:29:29.587 [2024-07-15 16:21:05.319444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.587 [2024-07-15 16:21:05.319473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.587 qpair failed and we were unable to recover it. 00:29:29.587 [2024-07-15 16:21:05.319871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.587 [2024-07-15 16:21:05.319881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.587 qpair failed and we were unable to recover it. 00:29:29.587 [2024-07-15 16:21:05.320393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.587 [2024-07-15 16:21:05.320422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.587 qpair failed and we were unable to recover it. 00:29:29.587 [2024-07-15 16:21:05.320685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.587 [2024-07-15 16:21:05.320695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.587 qpair failed and we were unable to recover it. 00:29:29.587 [2024-07-15 16:21:05.321084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.587 [2024-07-15 16:21:05.321092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.587 qpair failed and we were unable to recover it. 00:29:29.587 [2024-07-15 16:21:05.321453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.587 [2024-07-15 16:21:05.321462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.587 qpair failed and we were unable to recover it. 00:29:29.587 [2024-07-15 16:21:05.321865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.587 [2024-07-15 16:21:05.321874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.587 qpair failed and we were unable to recover it. 00:29:29.587 [2024-07-15 16:21:05.322292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.587 [2024-07-15 16:21:05.322301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.587 qpair failed and we were unable to recover it. 00:29:29.587 [2024-07-15 16:21:05.322689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.587 [2024-07-15 16:21:05.322697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.587 qpair failed and we were unable to recover it. 00:29:29.587 [2024-07-15 16:21:05.323086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.587 [2024-07-15 16:21:05.323094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.587 qpair failed and we were unable to recover it. 
00:29:29.587 [2024-07-15 16:21:05.323484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.587 [2024-07-15 16:21:05.323493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.587 qpair failed and we were unable to recover it. 00:29:29.587 [2024-07-15 16:21:05.323919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.587 [2024-07-15 16:21:05.323928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.587 qpair failed and we were unable to recover it. 00:29:29.587 [2024-07-15 16:21:05.324403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.587 [2024-07-15 16:21:05.324432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.587 qpair failed and we were unable to recover it. 00:29:29.587 [2024-07-15 16:21:05.324727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.587 [2024-07-15 16:21:05.324737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.587 qpair failed and we were unable to recover it. 00:29:29.587 [2024-07-15 16:21:05.325132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.587 [2024-07-15 16:21:05.325141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.587 qpair failed and we were unable to recover it. 00:29:29.587 [2024-07-15 16:21:05.325346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.587 [2024-07-15 16:21:05.325355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.588 qpair failed and we were unable to recover it. 00:29:29.588 [2024-07-15 16:21:05.325758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.588 [2024-07-15 16:21:05.325766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.588 qpair failed and we were unable to recover it. 00:29:29.588 [2024-07-15 16:21:05.326152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.588 [2024-07-15 16:21:05.326161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.588 qpair failed and we were unable to recover it. 00:29:29.588 [2024-07-15 16:21:05.326563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.588 [2024-07-15 16:21:05.326571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.588 qpair failed and we were unable to recover it. 00:29:29.588 [2024-07-15 16:21:05.326996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.588 [2024-07-15 16:21:05.327004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.588 qpair failed and we were unable to recover it. 
00:29:29.588 [2024-07-15 16:21:05.327384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.588 [2024-07-15 16:21:05.327392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.588 qpair failed and we were unable to recover it. 00:29:29.588 [2024-07-15 16:21:05.327779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.588 [2024-07-15 16:21:05.327787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.588 qpair failed and we were unable to recover it. 00:29:29.588 [2024-07-15 16:21:05.328181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.588 [2024-07-15 16:21:05.328189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.588 qpair failed and we were unable to recover it. 00:29:29.588 [2024-07-15 16:21:05.328618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.588 [2024-07-15 16:21:05.328626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.588 qpair failed and we were unable to recover it. 00:29:29.588 [2024-07-15 16:21:05.329018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.588 [2024-07-15 16:21:05.329025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.588 qpair failed and we were unable to recover it. 00:29:29.588 [2024-07-15 16:21:05.329403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.588 [2024-07-15 16:21:05.329411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.588 qpair failed and we were unable to recover it. 00:29:29.588 [2024-07-15 16:21:05.329801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.588 [2024-07-15 16:21:05.329809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.588 qpair failed and we were unable to recover it. 00:29:29.588 [2024-07-15 16:21:05.330218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.588 [2024-07-15 16:21:05.330226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.588 qpair failed and we were unable to recover it. 00:29:29.588 [2024-07-15 16:21:05.330627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.588 [2024-07-15 16:21:05.330635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.588 qpair failed and we were unable to recover it. 00:29:29.588 [2024-07-15 16:21:05.330837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.588 [2024-07-15 16:21:05.330845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.588 qpair failed and we were unable to recover it. 
00:29:29.588 [2024-07-15 16:21:05.331222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.588 [2024-07-15 16:21:05.331231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.588 qpair failed and we were unable to recover it. 00:29:29.588 [2024-07-15 16:21:05.331655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.588 [2024-07-15 16:21:05.331664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.588 qpair failed and we were unable to recover it. 00:29:29.588 [2024-07-15 16:21:05.332055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.588 [2024-07-15 16:21:05.332063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.588 qpair failed and we were unable to recover it. 00:29:29.588 [2024-07-15 16:21:05.332444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.588 [2024-07-15 16:21:05.332453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.588 qpair failed and we were unable to recover it. 00:29:29.588 [2024-07-15 16:21:05.332824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.588 [2024-07-15 16:21:05.332832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.588 qpair failed and we were unable to recover it. 00:29:29.588 [2024-07-15 16:21:05.333030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.588 [2024-07-15 16:21:05.333038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.588 qpair failed and we were unable to recover it. 00:29:29.588 [2024-07-15 16:21:05.333427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.588 [2024-07-15 16:21:05.333435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.588 qpair failed and we were unable to recover it. 00:29:29.588 [2024-07-15 16:21:05.333824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.588 [2024-07-15 16:21:05.333832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.588 qpair failed and we were unable to recover it. 00:29:29.588 [2024-07-15 16:21:05.334101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.588 [2024-07-15 16:21:05.334110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.588 qpair failed and we were unable to recover it. 00:29:29.588 [2024-07-15 16:21:05.334497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.588 [2024-07-15 16:21:05.334506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.588 qpair failed and we were unable to recover it. 
00:29:29.588 [2024-07-15 16:21:05.334894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.588 [2024-07-15 16:21:05.334902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.588 qpair failed and we were unable to recover it. 00:29:29.588 [2024-07-15 16:21:05.335292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.588 [2024-07-15 16:21:05.335300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.588 qpair failed and we were unable to recover it. 00:29:29.588 [2024-07-15 16:21:05.335588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.588 [2024-07-15 16:21:05.335597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.588 qpair failed and we were unable to recover it. 00:29:29.588 [2024-07-15 16:21:05.335971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.588 [2024-07-15 16:21:05.335979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.588 qpair failed and we were unable to recover it. 00:29:29.588 [2024-07-15 16:21:05.336371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.588 [2024-07-15 16:21:05.336379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.588 qpair failed and we were unable to recover it. 00:29:29.588 [2024-07-15 16:21:05.336814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.588 [2024-07-15 16:21:05.336823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.588 qpair failed and we were unable to recover it. 00:29:29.588 [2024-07-15 16:21:05.337262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.588 [2024-07-15 16:21:05.337270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.588 qpair failed and we were unable to recover it. 00:29:29.588 [2024-07-15 16:21:05.337467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.588 [2024-07-15 16:21:05.337474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.588 qpair failed and we were unable to recover it. 00:29:29.588 [2024-07-15 16:21:05.337880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.588 [2024-07-15 16:21:05.337888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.588 qpair failed and we were unable to recover it. 00:29:29.588 [2024-07-15 16:21:05.338275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.588 [2024-07-15 16:21:05.338283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.589 qpair failed and we were unable to recover it. 
00:29:29.589 [2024-07-15 16:21:05.338547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.589 [2024-07-15 16:21:05.338555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.589 qpair failed and we were unable to recover it. 00:29:29.589 [2024-07-15 16:21:05.338966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.589 [2024-07-15 16:21:05.338976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.589 qpair failed and we were unable to recover it. 00:29:29.589 [2024-07-15 16:21:05.339369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.589 [2024-07-15 16:21:05.339378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.589 qpair failed and we were unable to recover it. 00:29:29.589 [2024-07-15 16:21:05.339783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.589 [2024-07-15 16:21:05.339791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.589 qpair failed and we were unable to recover it. 00:29:29.589 [2024-07-15 16:21:05.340207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.589 [2024-07-15 16:21:05.340217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.589 qpair failed and we were unable to recover it. 00:29:29.589 [2024-07-15 16:21:05.340662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.589 [2024-07-15 16:21:05.340670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.589 qpair failed and we were unable to recover it. 00:29:29.589 [2024-07-15 16:21:05.341049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.589 [2024-07-15 16:21:05.341057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.589 qpair failed and we were unable to recover it. 00:29:29.589 [2024-07-15 16:21:05.341452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.589 [2024-07-15 16:21:05.341460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.589 qpair failed and we were unable to recover it. 00:29:29.589 [2024-07-15 16:21:05.341851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.589 [2024-07-15 16:21:05.341859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.589 qpair failed and we were unable to recover it. 00:29:29.589 [2024-07-15 16:21:05.342268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.589 [2024-07-15 16:21:05.342277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.589 qpair failed and we were unable to recover it. 
00:29:29.589 [2024-07-15 16:21:05.342595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.589 [2024-07-15 16:21:05.342603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.589 qpair failed and we were unable to recover it. 00:29:29.589 [2024-07-15 16:21:05.343044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.589 [2024-07-15 16:21:05.343053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.589 qpair failed and we were unable to recover it. 00:29:29.589 [2024-07-15 16:21:05.343436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.589 [2024-07-15 16:21:05.343445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.589 qpair failed and we were unable to recover it. 00:29:29.589 [2024-07-15 16:21:05.343858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.589 [2024-07-15 16:21:05.343867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.589 qpair failed and we were unable to recover it. 00:29:29.589 [2024-07-15 16:21:05.344259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.589 [2024-07-15 16:21:05.344268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.589 qpair failed and we were unable to recover it. 00:29:29.589 [2024-07-15 16:21:05.344658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.589 [2024-07-15 16:21:05.344668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.589 qpair failed and we were unable to recover it. 00:29:29.589 [2024-07-15 16:21:05.345057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.589 [2024-07-15 16:21:05.345065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.589 qpair failed and we were unable to recover it. 00:29:29.589 [2024-07-15 16:21:05.345527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.589 [2024-07-15 16:21:05.345535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.589 qpair failed and we were unable to recover it. 00:29:29.589 [2024-07-15 16:21:05.345840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.589 [2024-07-15 16:21:05.345848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.589 qpair failed and we were unable to recover it. 00:29:29.589 [2024-07-15 16:21:05.346249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.589 [2024-07-15 16:21:05.346256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.589 qpair failed and we were unable to recover it. 
00:29:29.589 [2024-07-15 16:21:05.346659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.589 [2024-07-15 16:21:05.346667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.589 qpair failed and we were unable to recover it. 00:29:29.589 [2024-07-15 16:21:05.347074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.589 [2024-07-15 16:21:05.347082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.589 qpair failed and we were unable to recover it. 00:29:29.589 [2024-07-15 16:21:05.347464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.589 [2024-07-15 16:21:05.347472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.589 qpair failed and we were unable to recover it. 00:29:29.589 [2024-07-15 16:21:05.347861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.589 [2024-07-15 16:21:05.347869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.589 qpair failed and we were unable to recover it. 00:29:29.589 [2024-07-15 16:21:05.348260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.589 [2024-07-15 16:21:05.348268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.589 qpair failed and we were unable to recover it. 00:29:29.589 [2024-07-15 16:21:05.348689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.589 [2024-07-15 16:21:05.348697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.589 qpair failed and we were unable to recover it. 00:29:29.589 [2024-07-15 16:21:05.349088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.589 [2024-07-15 16:21:05.349096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.589 qpair failed and we were unable to recover it. 00:29:29.589 [2024-07-15 16:21:05.349493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.589 [2024-07-15 16:21:05.349502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.589 qpair failed and we were unable to recover it. 00:29:29.589 [2024-07-15 16:21:05.349663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.589 [2024-07-15 16:21:05.349673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.589 qpair failed and we were unable to recover it. 00:29:29.589 [2024-07-15 16:21:05.350081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.589 [2024-07-15 16:21:05.350090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.589 qpair failed and we were unable to recover it. 
00:29:29.589 [2024-07-15 16:21:05.350439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.589 [2024-07-15 16:21:05.350448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.589 qpair failed and we were unable to recover it. 00:29:29.589 [2024-07-15 16:21:05.350830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.589 [2024-07-15 16:21:05.350838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.589 qpair failed and we were unable to recover it. 00:29:29.589 [2024-07-15 16:21:05.351228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.589 [2024-07-15 16:21:05.351242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.589 qpair failed and we were unable to recover it. 00:29:29.589 [2024-07-15 16:21:05.351652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.589 [2024-07-15 16:21:05.351660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.589 qpair failed and we were unable to recover it. 00:29:29.589 [2024-07-15 16:21:05.352043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.589 [2024-07-15 16:21:05.352052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.589 qpair failed and we were unable to recover it. 00:29:29.589 [2024-07-15 16:21:05.352448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.589 [2024-07-15 16:21:05.352456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.589 qpair failed and we were unable to recover it. 00:29:29.589 [2024-07-15 16:21:05.352721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.589 [2024-07-15 16:21:05.352729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.589 qpair failed and we were unable to recover it. 00:29:29.589 [2024-07-15 16:21:05.353142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.589 [2024-07-15 16:21:05.353151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.589 qpair failed and we were unable to recover it. 00:29:29.589 [2024-07-15 16:21:05.353551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.589 [2024-07-15 16:21:05.353559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.589 qpair failed and we were unable to recover it. 00:29:29.589 [2024-07-15 16:21:05.353950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.589 [2024-07-15 16:21:05.353957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.589 qpair failed and we were unable to recover it. 
00:29:29.589 [2024-07-15 16:21:05.354374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.590 [2024-07-15 16:21:05.354382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.590 qpair failed and we were unable to recover it. 00:29:29.590 [2024-07-15 16:21:05.354795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.590 [2024-07-15 16:21:05.354803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.590 qpair failed and we were unable to recover it. 00:29:29.590 [2024-07-15 16:21:05.355193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.590 [2024-07-15 16:21:05.355202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.590 qpair failed and we were unable to recover it. 00:29:29.590 [2024-07-15 16:21:05.355594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.590 [2024-07-15 16:21:05.355602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.590 qpair failed and we were unable to recover it. 00:29:29.590 [2024-07-15 16:21:05.355993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.590 [2024-07-15 16:21:05.356002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.590 qpair failed and we were unable to recover it. 00:29:29.590 [2024-07-15 16:21:05.356415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.590 [2024-07-15 16:21:05.356425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.590 qpair failed and we were unable to recover it. 00:29:29.590 [2024-07-15 16:21:05.356808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.590 [2024-07-15 16:21:05.356817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.590 qpair failed and we were unable to recover it. 00:29:29.590 [2024-07-15 16:21:05.357310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.590 [2024-07-15 16:21:05.357340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.590 qpair failed and we were unable to recover it. 00:29:29.590 [2024-07-15 16:21:05.357743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.590 [2024-07-15 16:21:05.357753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.590 qpair failed and we were unable to recover it. 00:29:29.590 [2024-07-15 16:21:05.358170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.590 [2024-07-15 16:21:05.358179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.590 qpair failed and we were unable to recover it. 
00:29:29.590 [2024-07-15 16:21:05.358571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.590 [2024-07-15 16:21:05.358579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.590 qpair failed and we were unable to recover it. 00:29:29.590 [2024-07-15 16:21:05.358972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.590 [2024-07-15 16:21:05.358980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.590 qpair failed and we were unable to recover it. 00:29:29.590 [2024-07-15 16:21:05.359374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.590 [2024-07-15 16:21:05.359382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.590 qpair failed and we were unable to recover it. 00:29:29.590 [2024-07-15 16:21:05.359792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.590 [2024-07-15 16:21:05.359800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.590 qpair failed and we were unable to recover it. 00:29:29.590 [2024-07-15 16:21:05.360005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.590 [2024-07-15 16:21:05.360015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.590 qpair failed and we were unable to recover it. 00:29:29.590 [2024-07-15 16:21:05.360407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.590 [2024-07-15 16:21:05.360419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.590 qpair failed and we were unable to recover it. 00:29:29.590 [2024-07-15 16:21:05.360800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.590 [2024-07-15 16:21:05.360809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.590 qpair failed and we were unable to recover it. 00:29:29.590 [2024-07-15 16:21:05.361222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.590 [2024-07-15 16:21:05.361230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.590 qpair failed and we were unable to recover it. 00:29:29.590 [2024-07-15 16:21:05.361630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.590 [2024-07-15 16:21:05.361639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.590 qpair failed and we were unable to recover it. 00:29:29.590 [2024-07-15 16:21:05.362067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.590 [2024-07-15 16:21:05.362076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.590 qpair failed and we were unable to recover it. 
00:29:29.590 [2024-07-15 16:21:05.362468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.590 [2024-07-15 16:21:05.362477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.590 qpair failed and we were unable to recover it. 00:29:29.590 [2024-07-15 16:21:05.362893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.590 [2024-07-15 16:21:05.362901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.590 qpair failed and we were unable to recover it. 00:29:29.590 [2024-07-15 16:21:05.363351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.590 [2024-07-15 16:21:05.363359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.590 qpair failed and we were unable to recover it. 00:29:29.590 [2024-07-15 16:21:05.363741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.590 [2024-07-15 16:21:05.363748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.590 qpair failed and we were unable to recover it. 00:29:29.590 [2024-07-15 16:21:05.364048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.590 [2024-07-15 16:21:05.364057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.590 qpair failed and we were unable to recover it. 00:29:29.590 [2024-07-15 16:21:05.364437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.590 [2024-07-15 16:21:05.364445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.590 qpair failed and we were unable to recover it. 00:29:29.590 [2024-07-15 16:21:05.364836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.590 [2024-07-15 16:21:05.364844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.590 qpair failed and we were unable to recover it. 00:29:29.590 [2024-07-15 16:21:05.365230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.590 [2024-07-15 16:21:05.365239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.590 qpair failed and we were unable to recover it. 00:29:29.590 [2024-07-15 16:21:05.365498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.590 [2024-07-15 16:21:05.365506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.590 qpair failed and we were unable to recover it. 00:29:29.590 [2024-07-15 16:21:05.365917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.590 [2024-07-15 16:21:05.365925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.590 qpair failed and we were unable to recover it. 
00:29:29.590 [2024-07-15 16:21:05.366306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.590 [2024-07-15 16:21:05.366314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.590 qpair failed and we were unable to recover it. 00:29:29.590 [2024-07-15 16:21:05.366702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.590 [2024-07-15 16:21:05.366710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.590 qpair failed and we were unable to recover it. 00:29:29.590 [2024-07-15 16:21:05.367097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.590 [2024-07-15 16:21:05.367105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.590 qpair failed and we were unable to recover it. 00:29:29.590 [2024-07-15 16:21:05.367524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.590 [2024-07-15 16:21:05.367532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.590 qpair failed and we were unable to recover it. 00:29:29.590 [2024-07-15 16:21:05.367958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.590 [2024-07-15 16:21:05.367967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.590 qpair failed and we were unable to recover it. 00:29:29.590 [2024-07-15 16:21:05.368452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.590 [2024-07-15 16:21:05.368481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.590 qpair failed and we were unable to recover it. 00:29:29.590 [2024-07-15 16:21:05.368880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.590 [2024-07-15 16:21:05.368890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.590 qpair failed and we were unable to recover it. 00:29:29.590 [2024-07-15 16:21:05.369424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.590 [2024-07-15 16:21:05.369454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.590 qpair failed and we were unable to recover it. 00:29:29.590 [2024-07-15 16:21:05.369859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.590 [2024-07-15 16:21:05.369869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.590 qpair failed and we were unable to recover it. 00:29:29.590 [2024-07-15 16:21:05.370364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.590 [2024-07-15 16:21:05.370393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.590 qpair failed and we were unable to recover it. 
00:29:29.590 [2024-07-15 16:21:05.370792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.590 [2024-07-15 16:21:05.370802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.590 qpair failed and we were unable to recover it. 00:29:29.591 [2024-07-15 16:21:05.371225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.591 [2024-07-15 16:21:05.371234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.591 qpair failed and we were unable to recover it. 00:29:29.591 [2024-07-15 16:21:05.371642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.591 [2024-07-15 16:21:05.371650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.591 qpair failed and we were unable to recover it. 00:29:29.591 [2024-07-15 16:21:05.372044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.591 [2024-07-15 16:21:05.372052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.591 qpair failed and we were unable to recover it. 00:29:29.591 [2024-07-15 16:21:05.372448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.591 [2024-07-15 16:21:05.372456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.591 qpair failed and we were unable to recover it. 00:29:29.591 [2024-07-15 16:21:05.372879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.591 [2024-07-15 16:21:05.372888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.591 qpair failed and we were unable to recover it. 00:29:29.591 [2024-07-15 16:21:05.373274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.591 [2024-07-15 16:21:05.373283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.591 qpair failed and we were unable to recover it. 00:29:29.591 [2024-07-15 16:21:05.373650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.591 [2024-07-15 16:21:05.373659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.591 qpair failed and we were unable to recover it. 00:29:29.591 [2024-07-15 16:21:05.373920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.591 [2024-07-15 16:21:05.373929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.591 qpair failed and we were unable to recover it. 00:29:29.591 [2024-07-15 16:21:05.374341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.591 [2024-07-15 16:21:05.374349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.591 qpair failed and we were unable to recover it. 
00:29:29.591 [2024-07-15 16:21:05.374743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.591 [2024-07-15 16:21:05.374751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.591 qpair failed and we were unable to recover it. 00:29:29.591 [2024-07-15 16:21:05.375147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.591 [2024-07-15 16:21:05.375156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.591 qpair failed and we were unable to recover it. 00:29:29.591 [2024-07-15 16:21:05.375412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.591 [2024-07-15 16:21:05.375420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.591 qpair failed and we were unable to recover it. 00:29:29.591 [2024-07-15 16:21:05.375830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.591 [2024-07-15 16:21:05.375838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.591 qpair failed and we were unable to recover it. 00:29:29.591 [2024-07-15 16:21:05.376228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.591 [2024-07-15 16:21:05.376236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.591 qpair failed and we were unable to recover it. 00:29:29.591 [2024-07-15 16:21:05.376615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.591 [2024-07-15 16:21:05.376625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.591 qpair failed and we were unable to recover it. 00:29:29.591 [2024-07-15 16:21:05.377014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.591 [2024-07-15 16:21:05.377023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.591 qpair failed and we were unable to recover it. 00:29:29.591 [2024-07-15 16:21:05.377438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.591 [2024-07-15 16:21:05.377448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.591 qpair failed and we were unable to recover it. 00:29:29.591 [2024-07-15 16:21:05.377746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.591 [2024-07-15 16:21:05.377754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.591 qpair failed and we were unable to recover it. 00:29:29.591 [2024-07-15 16:21:05.378165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.591 [2024-07-15 16:21:05.378174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.591 qpair failed and we were unable to recover it. 
00:29:29.591 [2024-07-15 16:21:05.378573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.591 [2024-07-15 16:21:05.378581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.591 qpair failed and we were unable to recover it. 00:29:29.591 [2024-07-15 16:21:05.378991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.591 [2024-07-15 16:21:05.378999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.591 qpair failed and we were unable to recover it. 00:29:29.591 [2024-07-15 16:21:05.379417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.591 [2024-07-15 16:21:05.379426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.591 qpair failed and we were unable to recover it. 00:29:29.591 [2024-07-15 16:21:05.379816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.591 [2024-07-15 16:21:05.379825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.591 qpair failed and we were unable to recover it. 00:29:29.591 [2024-07-15 16:21:05.380217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.591 [2024-07-15 16:21:05.380226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.591 qpair failed and we were unable to recover it. 00:29:29.591 [2024-07-15 16:21:05.380573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.591 [2024-07-15 16:21:05.380581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.591 qpair failed and we were unable to recover it. 00:29:29.591 [2024-07-15 16:21:05.380994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.591 [2024-07-15 16:21:05.381003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.591 qpair failed and we were unable to recover it. 00:29:29.591 [2024-07-15 16:21:05.381397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.591 [2024-07-15 16:21:05.381407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.591 qpair failed and we were unable to recover it. 00:29:29.591 [2024-07-15 16:21:05.381800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.591 [2024-07-15 16:21:05.381808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.591 qpair failed and we were unable to recover it. 00:29:29.591 [2024-07-15 16:21:05.382218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.591 [2024-07-15 16:21:05.382226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.591 qpair failed and we were unable to recover it. 
00:29:29.591 [2024-07-15 16:21:05.382630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.591 [2024-07-15 16:21:05.382638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.591 qpair failed and we were unable to recover it. 00:29:29.591 [2024-07-15 16:21:05.382897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.591 [2024-07-15 16:21:05.382905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.591 qpair failed and we were unable to recover it. 00:29:29.591 [2024-07-15 16:21:05.383296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.591 [2024-07-15 16:21:05.383304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.591 qpair failed and we were unable to recover it. 00:29:29.591 [2024-07-15 16:21:05.383687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.591 [2024-07-15 16:21:05.383695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.591 qpair failed and we were unable to recover it. 00:29:29.591 [2024-07-15 16:21:05.384088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.591 [2024-07-15 16:21:05.384096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.591 qpair failed and we were unable to recover it. 00:29:29.591 [2024-07-15 16:21:05.384476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.591 [2024-07-15 16:21:05.384484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.591 qpair failed and we were unable to recover it. 00:29:29.591 [2024-07-15 16:21:05.384874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.591 [2024-07-15 16:21:05.384882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.591 qpair failed and we were unable to recover it. 00:29:29.591 [2024-07-15 16:21:05.385138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.591 [2024-07-15 16:21:05.385146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.591 qpair failed and we were unable to recover it. 00:29:29.591 [2024-07-15 16:21:05.385547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.591 [2024-07-15 16:21:05.385556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.591 qpair failed and we were unable to recover it. 00:29:29.591 [2024-07-15 16:21:05.385853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.591 [2024-07-15 16:21:05.385862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.591 qpair failed and we were unable to recover it. 
00:29:29.591 [2024-07-15 16:21:05.386263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.591 [2024-07-15 16:21:05.386271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.591 qpair failed and we were unable to recover it. 00:29:29.591 [2024-07-15 16:21:05.386660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.591 [2024-07-15 16:21:05.386668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.592 qpair failed and we were unable to recover it. 00:29:29.592 [2024-07-15 16:21:05.387060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.592 [2024-07-15 16:21:05.387068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.592 qpair failed and we were unable to recover it. 00:29:29.592 [2024-07-15 16:21:05.387460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.592 [2024-07-15 16:21:05.387468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.592 qpair failed and we were unable to recover it. 00:29:29.592 [2024-07-15 16:21:05.387790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.592 [2024-07-15 16:21:05.387798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.592 qpair failed and we were unable to recover it. 00:29:29.592 [2024-07-15 16:21:05.388202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.592 [2024-07-15 16:21:05.388210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.592 qpair failed and we were unable to recover it. 00:29:29.592 [2024-07-15 16:21:05.388609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.592 [2024-07-15 16:21:05.388617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.592 qpair failed and we were unable to recover it. 00:29:29.592 [2024-07-15 16:21:05.389070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.592 [2024-07-15 16:21:05.389080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.592 qpair failed and we were unable to recover it. 00:29:29.592 [2024-07-15 16:21:05.389466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.592 [2024-07-15 16:21:05.389475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.592 qpair failed and we were unable to recover it. 00:29:29.592 [2024-07-15 16:21:05.389886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.592 [2024-07-15 16:21:05.389894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.592 qpair failed and we were unable to recover it. 
00:29:29.592 [2024-07-15 16:21:05.390240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.592 [2024-07-15 16:21:05.390248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.592 qpair failed and we were unable to recover it. 00:29:29.592 [2024-07-15 16:21:05.390646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.592 [2024-07-15 16:21:05.390654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.592 qpair failed and we were unable to recover it. 00:29:29.592 [2024-07-15 16:21:05.391043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.592 [2024-07-15 16:21:05.391051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.592 qpair failed and we were unable to recover it. 00:29:29.592 [2024-07-15 16:21:05.391306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.592 [2024-07-15 16:21:05.391314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.592 qpair failed and we were unable to recover it. 00:29:29.592 [2024-07-15 16:21:05.391712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.592 [2024-07-15 16:21:05.391720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.592 qpair failed and we were unable to recover it. 00:29:29.592 [2024-07-15 16:21:05.392111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.592 [2024-07-15 16:21:05.392120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.592 qpair failed and we were unable to recover it. 00:29:29.592 [2024-07-15 16:21:05.392516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.592 [2024-07-15 16:21:05.392524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.592 qpair failed and we were unable to recover it. 00:29:29.592 [2024-07-15 16:21:05.392934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.592 [2024-07-15 16:21:05.392942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.592 qpair failed and we were unable to recover it. 00:29:29.592 [2024-07-15 16:21:05.393367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.592 [2024-07-15 16:21:05.393396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.592 qpair failed and we were unable to recover it. 00:29:29.592 [2024-07-15 16:21:05.393796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.592 [2024-07-15 16:21:05.393806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.592 qpair failed and we were unable to recover it. 
00:29:29.592 [2024-07-15 16:21:05.394204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.592 [2024-07-15 16:21:05.394214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.592 qpair failed and we were unable to recover it. 00:29:29.592 [2024-07-15 16:21:05.394639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.592 [2024-07-15 16:21:05.394648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.592 qpair failed and we were unable to recover it. 00:29:29.592 [2024-07-15 16:21:05.395029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.592 [2024-07-15 16:21:05.395038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.592 qpair failed and we were unable to recover it. 00:29:29.592 [2024-07-15 16:21:05.395335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.592 [2024-07-15 16:21:05.395343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.592 qpair failed and we were unable to recover it. 00:29:29.592 [2024-07-15 16:21:05.395735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.592 [2024-07-15 16:21:05.395743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.592 qpair failed and we were unable to recover it. 00:29:29.592 [2024-07-15 16:21:05.396154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.592 [2024-07-15 16:21:05.396162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.592 qpair failed and we were unable to recover it. 00:29:29.592 [2024-07-15 16:21:05.396554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.592 [2024-07-15 16:21:05.396562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.592 qpair failed and we were unable to recover it. 00:29:29.592 [2024-07-15 16:21:05.396956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.592 [2024-07-15 16:21:05.396964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.592 qpair failed and we were unable to recover it. 00:29:29.592 [2024-07-15 16:21:05.397285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.592 [2024-07-15 16:21:05.397294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.592 qpair failed and we were unable to recover it. 00:29:29.592 [2024-07-15 16:21:05.397700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.592 [2024-07-15 16:21:05.397708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.592 qpair failed and we were unable to recover it. 
[The same three-line failure sequence repeats unchanged from 2024-07-15 16:21:05.398096 through 16:21:05.480866 (elapsed 00:29:29.592 to 00:29:29.869): posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it."; only the timestamps differ between repetitions.]
00:29:29.869 [2024-07-15 16:21:05.481178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.869 [2024-07-15 16:21:05.481187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.869 qpair failed and we were unable to recover it. 00:29:29.869 [2024-07-15 16:21:05.481562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.869 [2024-07-15 16:21:05.481571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.869 qpair failed and we were unable to recover it. 00:29:29.869 [2024-07-15 16:21:05.481946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.869 [2024-07-15 16:21:05.481954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.869 qpair failed and we were unable to recover it. 00:29:29.869 [2024-07-15 16:21:05.482298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-07-15 16:21:05.482307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-07-15 16:21:05.482671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-07-15 16:21:05.482679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-07-15 16:21:05.483070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-07-15 16:21:05.483079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-07-15 16:21:05.483460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-07-15 16:21:05.483468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-07-15 16:21:05.483860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-07-15 16:21:05.483868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-07-15 16:21:05.484259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-07-15 16:21:05.484267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-07-15 16:21:05.484655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-07-15 16:21:05.484662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 
00:29:29.870 [2024-07-15 16:21:05.485063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-07-15 16:21:05.485071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-07-15 16:21:05.485468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-07-15 16:21:05.485476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-07-15 16:21:05.485886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-07-15 16:21:05.485895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-07-15 16:21:05.486282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-07-15 16:21:05.486292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-07-15 16:21:05.486696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-07-15 16:21:05.486704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-07-15 16:21:05.487096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-07-15 16:21:05.487104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-07-15 16:21:05.487499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-07-15 16:21:05.487507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-07-15 16:21:05.487909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-07-15 16:21:05.487918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-07-15 16:21:05.488426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-07-15 16:21:05.488455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-07-15 16:21:05.488874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-07-15 16:21:05.488884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 
00:29:29.870 [2024-07-15 16:21:05.489361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-07-15 16:21:05.489390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-07-15 16:21:05.489795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-07-15 16:21:05.489805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-07-15 16:21:05.490203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-07-15 16:21:05.490212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-07-15 16:21:05.490619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-07-15 16:21:05.490628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-07-15 16:21:05.490910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-07-15 16:21:05.490919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-07-15 16:21:05.491184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-07-15 16:21:05.491196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-07-15 16:21:05.491601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-07-15 16:21:05.491610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-07-15 16:21:05.492015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-07-15 16:21:05.492023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-07-15 16:21:05.492432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-07-15 16:21:05.492441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-07-15 16:21:05.492837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-07-15 16:21:05.492845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 
00:29:29.870 [2024-07-15 16:21:05.493259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-07-15 16:21:05.493267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-07-15 16:21:05.493568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-07-15 16:21:05.493576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-07-15 16:21:05.493948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-07-15 16:21:05.493956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-07-15 16:21:05.494213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-07-15 16:21:05.494222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-07-15 16:21:05.494613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-07-15 16:21:05.494621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-07-15 16:21:05.495000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-07-15 16:21:05.495008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-07-15 16:21:05.495404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-07-15 16:21:05.495413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-07-15 16:21:05.495798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-07-15 16:21:05.495806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-07-15 16:21:05.496171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-07-15 16:21:05.496179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-07-15 16:21:05.496569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-07-15 16:21:05.496579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 
00:29:29.870 [2024-07-15 16:21:05.496964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-07-15 16:21:05.496973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-07-15 16:21:05.497382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-07-15 16:21:05.497391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-07-15 16:21:05.497770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-07-15 16:21:05.497778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-07-15 16:21:05.498200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-07-15 16:21:05.498209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.871 [2024-07-15 16:21:05.498637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-07-15 16:21:05.498647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-07-15 16:21:05.499100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-07-15 16:21:05.499110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-07-15 16:21:05.499648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-07-15 16:21:05.499657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-07-15 16:21:05.500047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-07-15 16:21:05.500056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-07-15 16:21:05.500347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-07-15 16:21:05.500356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-07-15 16:21:05.500745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-07-15 16:21:05.500753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 
00:29:29.871 [2024-07-15 16:21:05.501135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-07-15 16:21:05.501144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-07-15 16:21:05.501455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-07-15 16:21:05.501463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-07-15 16:21:05.501881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-07-15 16:21:05.501890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-07-15 16:21:05.502279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-07-15 16:21:05.502288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-07-15 16:21:05.502683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-07-15 16:21:05.502692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-07-15 16:21:05.503104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-07-15 16:21:05.503113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-07-15 16:21:05.503504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-07-15 16:21:05.503513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-07-15 16:21:05.503908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-07-15 16:21:05.503916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-07-15 16:21:05.504321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-07-15 16:21:05.504329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-07-15 16:21:05.504725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-07-15 16:21:05.504733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 
00:29:29.871 [2024-07-15 16:21:05.505117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-07-15 16:21:05.505131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-07-15 16:21:05.505506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-07-15 16:21:05.505514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-07-15 16:21:05.505889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-07-15 16:21:05.505897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-07-15 16:21:05.506341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-07-15 16:21:05.506370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-07-15 16:21:05.506812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-07-15 16:21:05.506822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-07-15 16:21:05.507350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-07-15 16:21:05.507383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-07-15 16:21:05.507788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-07-15 16:21:05.507798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-07-15 16:21:05.508059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-07-15 16:21:05.508069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-07-15 16:21:05.508550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-07-15 16:21:05.508559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-07-15 16:21:05.508955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-07-15 16:21:05.508963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 
00:29:29.871 [2024-07-15 16:21:05.509462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-07-15 16:21:05.509491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-07-15 16:21:05.509898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-07-15 16:21:05.509908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-07-15 16:21:05.510466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-07-15 16:21:05.510495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-07-15 16:21:05.510901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-07-15 16:21:05.510910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-07-15 16:21:05.511420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-07-15 16:21:05.511449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-07-15 16:21:05.511861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-07-15 16:21:05.511871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-07-15 16:21:05.512509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-07-15 16:21:05.512538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-07-15 16:21:05.512876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-07-15 16:21:05.512885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-07-15 16:21:05.513392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-07-15 16:21:05.513421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-07-15 16:21:05.513870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-07-15 16:21:05.513880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 
00:29:29.871 [2024-07-15 16:21:05.514428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-07-15 16:21:05.514457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-07-15 16:21:05.514881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-07-15 16:21:05.514891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-07-15 16:21:05.515373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-07-15 16:21:05.515402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-07-15 16:21:05.515687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-07-15 16:21:05.515698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-07-15 16:21:05.516006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-07-15 16:21:05.516014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-07-15 16:21:05.516509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-07-15 16:21:05.516517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-07-15 16:21:05.516921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-07-15 16:21:05.516930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-07-15 16:21:05.517330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-07-15 16:21:05.517339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-07-15 16:21:05.517731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-07-15 16:21:05.517739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-07-15 16:21:05.518173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-07-15 16:21:05.518181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 
00:29:29.872 [2024-07-15 16:21:05.518589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-07-15 16:21:05.518598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-07-15 16:21:05.518854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-07-15 16:21:05.518863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-07-15 16:21:05.519273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-07-15 16:21:05.519281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-07-15 16:21:05.519690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-07-15 16:21:05.519699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-07-15 16:21:05.519993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-07-15 16:21:05.520002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-07-15 16:21:05.520238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-07-15 16:21:05.520246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-07-15 16:21:05.520639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-07-15 16:21:05.520647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-07-15 16:21:05.521036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-07-15 16:21:05.521044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-07-15 16:21:05.521518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-07-15 16:21:05.521527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-07-15 16:21:05.521721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-07-15 16:21:05.521730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 
00:29:29.872 [2024-07-15 16:21:05.522042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-07-15 16:21:05.522050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-07-15 16:21:05.522413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-07-15 16:21:05.522422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-07-15 16:21:05.522813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-07-15 16:21:05.522821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-07-15 16:21:05.523113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-07-15 16:21:05.523120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-07-15 16:21:05.523519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-07-15 16:21:05.523527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-07-15 16:21:05.523918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-07-15 16:21:05.523930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-07-15 16:21:05.524378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-07-15 16:21:05.524387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-07-15 16:21:05.524769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-07-15 16:21:05.524777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-07-15 16:21:05.525096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-07-15 16:21:05.525104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-07-15 16:21:05.525496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-07-15 16:21:05.525504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 
00:29:29.872 [2024-07-15 16:21:05.525895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-07-15 16:21:05.525903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-07-15 16:21:05.526398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-07-15 16:21:05.526426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-07-15 16:21:05.526840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-07-15 16:21:05.526849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-07-15 16:21:05.527129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-07-15 16:21:05.527137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-07-15 16:21:05.527535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-07-15 16:21:05.527543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-07-15 16:21:05.527964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-07-15 16:21:05.527972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-07-15 16:21:05.528474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-07-15 16:21:05.528503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-07-15 16:21:05.528904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-07-15 16:21:05.528914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-07-15 16:21:05.529414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-07-15 16:21:05.529444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-07-15 16:21:05.529854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-07-15 16:21:05.529863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 
00:29:29.872 [2024-07-15 16:21:05.530375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-07-15 16:21:05.530404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-07-15 16:21:05.530804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-07-15 16:21:05.530814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-07-15 16:21:05.531326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-07-15 16:21:05.531355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-07-15 16:21:05.531747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-07-15 16:21:05.531756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-07-15 16:21:05.532153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.873 [2024-07-15 16:21:05.532161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.873 qpair failed and we were unable to recover it. 00:29:29.873 [2024-07-15 16:21:05.532549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.873 [2024-07-15 16:21:05.532557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.873 qpair failed and we were unable to recover it. 00:29:29.873 [2024-07-15 16:21:05.532949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.873 [2024-07-15 16:21:05.532957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.873 qpair failed and we were unable to recover it. 00:29:29.873 [2024-07-15 16:21:05.533316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.873 [2024-07-15 16:21:05.533325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.873 qpair failed and we were unable to recover it. 00:29:29.873 [2024-07-15 16:21:05.533772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.873 [2024-07-15 16:21:05.533780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.873 qpair failed and we were unable to recover it. 00:29:29.873 [2024-07-15 16:21:05.534148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.873 [2024-07-15 16:21:05.534156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.873 qpair failed and we were unable to recover it. 
00:29:29.873 [2024-07-15 16:21:05.534361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.873 [2024-07-15 16:21:05.534371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.873 qpair failed and we were unable to recover it. 00:29:29.873 [2024-07-15 16:21:05.534784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.873 [2024-07-15 16:21:05.534792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.873 qpair failed and we were unable to recover it. 00:29:29.873 [2024-07-15 16:21:05.535135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.873 [2024-07-15 16:21:05.535144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.873 qpair failed and we were unable to recover it. 00:29:29.873 [2024-07-15 16:21:05.535543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.873 [2024-07-15 16:21:05.535551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.873 qpair failed and we were unable to recover it. 00:29:29.873 [2024-07-15 16:21:05.535946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.873 [2024-07-15 16:21:05.535954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.873 qpair failed and we were unable to recover it. 00:29:29.873 [2024-07-15 16:21:05.536375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.873 [2024-07-15 16:21:05.536383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.873 qpair failed and we were unable to recover it. 00:29:29.873 [2024-07-15 16:21:05.536665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.873 [2024-07-15 16:21:05.536675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.873 qpair failed and we were unable to recover it. 00:29:29.873 [2024-07-15 16:21:05.537065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.873 [2024-07-15 16:21:05.537073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.873 qpair failed and we were unable to recover it. 00:29:29.873 [2024-07-15 16:21:05.537493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.873 [2024-07-15 16:21:05.537501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.873 qpair failed and we were unable to recover it. 00:29:29.873 [2024-07-15 16:21:05.537903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.873 [2024-07-15 16:21:05.537912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.873 qpair failed and we were unable to recover it. 
00:29:29.873 [2024-07-15 16:21:05.538307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.873 [2024-07-15 16:21:05.538316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.873 qpair failed and we were unable to recover it. 00:29:29.873 [2024-07-15 16:21:05.538706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.873 [2024-07-15 16:21:05.538714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.873 qpair failed and we were unable to recover it. 00:29:29.873 [2024-07-15 16:21:05.538815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.873 [2024-07-15 16:21:05.538824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.873 qpair failed and we were unable to recover it. 00:29:29.873 [2024-07-15 16:21:05.539177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.873 [2024-07-15 16:21:05.539185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.873 qpair failed and we were unable to recover it. 00:29:29.873 [2024-07-15 16:21:05.539553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.873 [2024-07-15 16:21:05.539561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.873 qpair failed and we were unable to recover it. 00:29:29.873 [2024-07-15 16:21:05.539802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.873 [2024-07-15 16:21:05.539810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.873 qpair failed and we were unable to recover it. 00:29:29.873 [2024-07-15 16:21:05.540296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.873 [2024-07-15 16:21:05.540304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.873 qpair failed and we were unable to recover it. 00:29:29.873 [2024-07-15 16:21:05.540718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.873 [2024-07-15 16:21:05.540726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.873 qpair failed and we were unable to recover it. 00:29:29.873 [2024-07-15 16:21:05.541113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.873 [2024-07-15 16:21:05.541121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.873 qpair failed and we were unable to recover it. 00:29:29.873 [2024-07-15 16:21:05.541496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.873 [2024-07-15 16:21:05.541504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.873 qpair failed and we were unable to recover it. 
00:29:29.873 [2024-07-15 16:21:05.541861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.873 [2024-07-15 16:21:05.541869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.873 qpair failed and we were unable to recover it. 00:29:29.873 [2024-07-15 16:21:05.542260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.873 [2024-07-15 16:21:05.542267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.873 qpair failed and we were unable to recover it. 00:29:29.873 [2024-07-15 16:21:05.542691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.873 [2024-07-15 16:21:05.542700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.873 qpair failed and we were unable to recover it. 00:29:29.873 [2024-07-15 16:21:05.542971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.873 [2024-07-15 16:21:05.542980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.873 qpair failed and we were unable to recover it. 00:29:29.873 [2024-07-15 16:21:05.543375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.873 [2024-07-15 16:21:05.543383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.873 qpair failed and we were unable to recover it. 00:29:29.873 [2024-07-15 16:21:05.543790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.873 [2024-07-15 16:21:05.543797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.873 qpair failed and we were unable to recover it. 00:29:29.873 [2024-07-15 16:21:05.544185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.873 [2024-07-15 16:21:05.544193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.873 qpair failed and we were unable to recover it. 00:29:29.873 [2024-07-15 16:21:05.544583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.873 [2024-07-15 16:21:05.544592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.873 qpair failed and we were unable to recover it. 00:29:29.873 [2024-07-15 16:21:05.544986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.873 [2024-07-15 16:21:05.544994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.874 qpair failed and we were unable to recover it. 00:29:29.874 [2024-07-15 16:21:05.545373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.874 [2024-07-15 16:21:05.545382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.874 qpair failed and we were unable to recover it. 
00:29:29.874 [2024-07-15 16:21:05.545717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.874 [2024-07-15 16:21:05.545725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.874 qpair failed and we were unable to recover it. 00:29:29.874 [2024-07-15 16:21:05.546127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.874 [2024-07-15 16:21:05.546136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.874 qpair failed and we were unable to recover it. 00:29:29.874 [2024-07-15 16:21:05.546519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.874 [2024-07-15 16:21:05.546528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.874 qpair failed and we were unable to recover it. 00:29:29.874 [2024-07-15 16:21:05.546919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.874 [2024-07-15 16:21:05.546927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.874 qpair failed and we were unable to recover it. 00:29:29.874 [2024-07-15 16:21:05.547309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.874 [2024-07-15 16:21:05.547318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.874 qpair failed and we were unable to recover it. 00:29:29.874 [2024-07-15 16:21:05.547717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.874 [2024-07-15 16:21:05.547725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.874 qpair failed and we were unable to recover it. 00:29:29.874 [2024-07-15 16:21:05.548012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.874 [2024-07-15 16:21:05.548021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.874 qpair failed and we were unable to recover it. 00:29:29.874 [2024-07-15 16:21:05.548333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.874 [2024-07-15 16:21:05.548343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.874 qpair failed and we were unable to recover it. 00:29:29.874 [2024-07-15 16:21:05.548731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.874 [2024-07-15 16:21:05.548740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.874 qpair failed and we were unable to recover it. 00:29:29.874 [2024-07-15 16:21:05.549175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.874 [2024-07-15 16:21:05.549184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.874 qpair failed and we were unable to recover it. 
00:29:29.874 [2024-07-15 16:21:05.549572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.874 [2024-07-15 16:21:05.549581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.874 qpair failed and we were unable to recover it. 00:29:29.874 [2024-07-15 16:21:05.549981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.874 [2024-07-15 16:21:05.549989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.874 qpair failed and we were unable to recover it. 00:29:29.874 [2024-07-15 16:21:05.550389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.874 [2024-07-15 16:21:05.550399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.874 qpair failed and we were unable to recover it. 00:29:29.874 [2024-07-15 16:21:05.550675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.874 [2024-07-15 16:21:05.550684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.874 qpair failed and we were unable to recover it. 00:29:29.874 [2024-07-15 16:21:05.551095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.874 [2024-07-15 16:21:05.551104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.874 qpair failed and we were unable to recover it. 00:29:29.874 [2024-07-15 16:21:05.551520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.874 [2024-07-15 16:21:05.551529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.874 qpair failed and we were unable to recover it. 00:29:29.874 [2024-07-15 16:21:05.551942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.874 [2024-07-15 16:21:05.551951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.874 qpair failed and we were unable to recover it. 00:29:29.874 [2024-07-15 16:21:05.552445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.874 [2024-07-15 16:21:05.552474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.874 qpair failed and we were unable to recover it. 00:29:29.874 [2024-07-15 16:21:05.552881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.874 [2024-07-15 16:21:05.552893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.874 qpair failed and we were unable to recover it. 00:29:29.874 [2024-07-15 16:21:05.553403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.874 [2024-07-15 16:21:05.553432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.874 qpair failed and we were unable to recover it. 
00:29:29.874 [2024-07-15 16:21:05.553853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.874 [2024-07-15 16:21:05.553863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.874 qpair failed and we were unable to recover it. 00:29:29.874 [2024-07-15 16:21:05.554424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.874 [2024-07-15 16:21:05.554452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.874 qpair failed and we were unable to recover it. 00:29:29.874 [2024-07-15 16:21:05.554827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.874 [2024-07-15 16:21:05.554838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.874 qpair failed and we were unable to recover it. 00:29:29.874 [2024-07-15 16:21:05.555256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.874 [2024-07-15 16:21:05.555264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.874 qpair failed and we were unable to recover it. 00:29:29.874 [2024-07-15 16:21:05.555664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.874 [2024-07-15 16:21:05.555672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.874 qpair failed and we were unable to recover it. 00:29:29.874 [2024-07-15 16:21:05.556071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.874 [2024-07-15 16:21:05.556080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.874 qpair failed and we were unable to recover it. 00:29:29.874 [2024-07-15 16:21:05.556347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.874 [2024-07-15 16:21:05.556357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.874 qpair failed and we were unable to recover it. 00:29:29.874 [2024-07-15 16:21:05.556756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.874 [2024-07-15 16:21:05.556764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.874 qpair failed and we were unable to recover it. 00:29:29.874 [2024-07-15 16:21:05.557166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.874 [2024-07-15 16:21:05.557174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.874 qpair failed and we were unable to recover it. 00:29:29.874 [2024-07-15 16:21:05.557596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.874 [2024-07-15 16:21:05.557604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.874 qpair failed and we were unable to recover it. 
00:29:29.874 [2024-07-15 16:21:05.558005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.874 [2024-07-15 16:21:05.558013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.874 qpair failed and we were unable to recover it. 00:29:29.874 [2024-07-15 16:21:05.558312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.874 [2024-07-15 16:21:05.558321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.874 qpair failed and we were unable to recover it. 00:29:29.874 [2024-07-15 16:21:05.558725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.874 [2024-07-15 16:21:05.558734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.874 qpair failed and we were unable to recover it. 00:29:29.874 [2024-07-15 16:21:05.559089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.874 [2024-07-15 16:21:05.559096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.874 qpair failed and we were unable to recover it. 00:29:29.874 [2024-07-15 16:21:05.559246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.874 [2024-07-15 16:21:05.559254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.874 qpair failed and we were unable to recover it. 00:29:29.874 [2024-07-15 16:21:05.559616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.874 [2024-07-15 16:21:05.559624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.874 qpair failed and we were unable to recover it. 00:29:29.874 [2024-07-15 16:21:05.560036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.874 [2024-07-15 16:21:05.560043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.874 qpair failed and we were unable to recover it. 00:29:29.874 [2024-07-15 16:21:05.560433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.874 [2024-07-15 16:21:05.560441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.874 qpair failed and we were unable to recover it. 00:29:29.874 [2024-07-15 16:21:05.560893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.874 [2024-07-15 16:21:05.560901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.874 qpair failed and we were unable to recover it. 00:29:29.875 [2024-07-15 16:21:05.561331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.875 [2024-07-15 16:21:05.561340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.875 qpair failed and we were unable to recover it. 
00:29:29.875 [2024-07-15 16:21:05.561629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.875 [2024-07-15 16:21:05.561639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.875 qpair failed and we were unable to recover it. 00:29:29.875 [2024-07-15 16:21:05.562033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.875 [2024-07-15 16:21:05.562042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.875 qpair failed and we were unable to recover it. 00:29:29.875 [2024-07-15 16:21:05.562464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.875 [2024-07-15 16:21:05.562473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.875 qpair failed and we were unable to recover it. 00:29:29.875 [2024-07-15 16:21:05.562851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.875 [2024-07-15 16:21:05.562860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.875 qpair failed and we were unable to recover it. 00:29:29.875 [2024-07-15 16:21:05.563292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.875 [2024-07-15 16:21:05.563301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.875 qpair failed and we were unable to recover it. 00:29:29.875 [2024-07-15 16:21:05.563654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.875 [2024-07-15 16:21:05.563661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.875 qpair failed and we were unable to recover it. 00:29:29.875 [2024-07-15 16:21:05.564051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.875 [2024-07-15 16:21:05.564059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.875 qpair failed and we were unable to recover it. 00:29:29.875 [2024-07-15 16:21:05.564504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.875 [2024-07-15 16:21:05.564513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.875 qpair failed and we were unable to recover it. 00:29:29.875 [2024-07-15 16:21:05.564774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.875 [2024-07-15 16:21:05.564785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.875 qpair failed and we were unable to recover it. 00:29:29.875 [2024-07-15 16:21:05.565168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.875 [2024-07-15 16:21:05.565177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.875 qpair failed and we were unable to recover it. 
00:29:29.875 [2024-07-15 16:21:05.565554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.875 [2024-07-15 16:21:05.565562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.875 qpair failed and we were unable to recover it. 00:29:29.875 [2024-07-15 16:21:05.565955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.875 [2024-07-15 16:21:05.565964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.875 qpair failed and we were unable to recover it. 00:29:29.875 [2024-07-15 16:21:05.566416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.875 [2024-07-15 16:21:05.566426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.875 qpair failed and we were unable to recover it. 00:29:29.875 [2024-07-15 16:21:05.566818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.875 [2024-07-15 16:21:05.566826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.875 qpair failed and we were unable to recover it. 00:29:29.875 [2024-07-15 16:21:05.567100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.875 [2024-07-15 16:21:05.567107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.875 qpair failed and we were unable to recover it. 00:29:29.875 [2024-07-15 16:21:05.567496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.875 [2024-07-15 16:21:05.567504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.875 qpair failed and we were unable to recover it. 00:29:29.875 [2024-07-15 16:21:05.567794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.875 [2024-07-15 16:21:05.567804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.875 qpair failed and we were unable to recover it. 00:29:29.875 [2024-07-15 16:21:05.568195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.875 [2024-07-15 16:21:05.568203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.875 qpair failed and we were unable to recover it. 00:29:29.875 [2024-07-15 16:21:05.568639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.875 [2024-07-15 16:21:05.568647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.875 qpair failed and we were unable to recover it. 00:29:29.875 [2024-07-15 16:21:05.569031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.875 [2024-07-15 16:21:05.569039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.875 qpair failed and we were unable to recover it. 
00:29:29.875 [2024-07-15 16:21:05.569448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.875 [2024-07-15 16:21:05.569456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.875 qpair failed and we were unable to recover it. 00:29:29.875 [2024-07-15 16:21:05.569851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.875 [2024-07-15 16:21:05.569860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.875 qpair failed and we were unable to recover it. 00:29:29.875 [2024-07-15 16:21:05.570262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.875 [2024-07-15 16:21:05.570270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.875 qpair failed and we were unable to recover it. 00:29:29.875 [2024-07-15 16:21:05.570681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.875 [2024-07-15 16:21:05.570689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.875 qpair failed and we were unable to recover it. 00:29:29.875 [2024-07-15 16:21:05.571100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.875 [2024-07-15 16:21:05.571108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.875 qpair failed and we were unable to recover it. 00:29:29.875 [2024-07-15 16:21:05.571404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.875 [2024-07-15 16:21:05.571413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.875 qpair failed and we were unable to recover it. 00:29:29.875 [2024-07-15 16:21:05.571810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.875 [2024-07-15 16:21:05.571818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.875 qpair failed and we were unable to recover it. 00:29:29.875 [2024-07-15 16:21:05.572208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.875 [2024-07-15 16:21:05.572217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.875 qpair failed and we were unable to recover it. 00:29:29.875 [2024-07-15 16:21:05.572603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.875 [2024-07-15 16:21:05.572612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.875 qpair failed and we were unable to recover it. 00:29:29.875 [2024-07-15 16:21:05.573006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.875 [2024-07-15 16:21:05.573014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.875 qpair failed and we were unable to recover it. 
00:29:29.875 [2024-07-15 16:21:05.573322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.875 [2024-07-15 16:21:05.573331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.875 qpair failed and we were unable to recover it. 00:29:29.875 [2024-07-15 16:21:05.573792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.875 [2024-07-15 16:21:05.573800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.875 qpair failed and we were unable to recover it. 00:29:29.875 [2024-07-15 16:21:05.574062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.875 [2024-07-15 16:21:05.574069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.875 qpair failed and we were unable to recover it. 00:29:29.875 [2024-07-15 16:21:05.574489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.875 [2024-07-15 16:21:05.574497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.875 qpair failed and we were unable to recover it. 00:29:29.875 [2024-07-15 16:21:05.574893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.875 [2024-07-15 16:21:05.574902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.875 qpair failed and we were unable to recover it. 00:29:29.875 [2024-07-15 16:21:05.575299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.875 [2024-07-15 16:21:05.575307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.875 qpair failed and we were unable to recover it. 00:29:29.875 [2024-07-15 16:21:05.575567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.875 [2024-07-15 16:21:05.575575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.875 qpair failed and we were unable to recover it. 00:29:29.875 [2024-07-15 16:21:05.575983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.875 [2024-07-15 16:21:05.575991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.875 qpair failed and we were unable to recover it. 00:29:29.875 [2024-07-15 16:21:05.576394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.875 [2024-07-15 16:21:05.576402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.875 qpair failed and we were unable to recover it. 00:29:29.875 [2024-07-15 16:21:05.576799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.876 [2024-07-15 16:21:05.576807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.876 qpair failed and we were unable to recover it. 
00:29:29.876 [2024-07-15 16:21:05.577224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.876 [2024-07-15 16:21:05.577232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.876 qpair failed and we were unable to recover it. 00:29:29.876 [2024-07-15 16:21:05.577557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.876 [2024-07-15 16:21:05.577565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.876 qpair failed and we were unable to recover it. 00:29:29.876 [2024-07-15 16:21:05.577957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.876 [2024-07-15 16:21:05.577966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.876 qpair failed and we were unable to recover it. 00:29:29.876 [2024-07-15 16:21:05.578417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.876 [2024-07-15 16:21:05.578426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.876 qpair failed and we were unable to recover it. 00:29:29.876 [2024-07-15 16:21:05.578807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.876 [2024-07-15 16:21:05.578816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.876 qpair failed and we were unable to recover it. 00:29:29.876 [2024-07-15 16:21:05.579376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.876 [2024-07-15 16:21:05.579405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.876 qpair failed and we were unable to recover it. 00:29:29.876 [2024-07-15 16:21:05.579852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.876 [2024-07-15 16:21:05.579862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.876 qpair failed and we were unable to recover it. 00:29:29.876 [2024-07-15 16:21:05.580251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.876 [2024-07-15 16:21:05.580258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.876 qpair failed and we were unable to recover it. 00:29:29.876 [2024-07-15 16:21:05.580534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.876 [2024-07-15 16:21:05.580540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.876 qpair failed and we were unable to recover it. 00:29:29.876 [2024-07-15 16:21:05.580986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.876 [2024-07-15 16:21:05.580992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.876 qpair failed and we were unable to recover it. 
00:29:29.876 [2024-07-15 16:21:05.581402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.876 [2024-07-15 16:21:05.581409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.876 qpair failed and we were unable to recover it. 00:29:29.876 [2024-07-15 16:21:05.581817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.876 [2024-07-15 16:21:05.581823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.876 qpair failed and we were unable to recover it. 00:29:29.876 [2024-07-15 16:21:05.582361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.876 [2024-07-15 16:21:05.582390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.876 qpair failed and we were unable to recover it. 00:29:29.876 [2024-07-15 16:21:05.582792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.876 [2024-07-15 16:21:05.582800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.876 qpair failed and we were unable to recover it. 00:29:29.876 [2024-07-15 16:21:05.583110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.876 [2024-07-15 16:21:05.583118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.876 qpair failed and we were unable to recover it. 00:29:29.876 [2024-07-15 16:21:05.583426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.876 [2024-07-15 16:21:05.583434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.876 qpair failed and we were unable to recover it. 00:29:29.876 [2024-07-15 16:21:05.583832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.876 [2024-07-15 16:21:05.583840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.876 qpair failed and we were unable to recover it. 00:29:29.876 [2024-07-15 16:21:05.584353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.876 [2024-07-15 16:21:05.584383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.876 qpair failed and we were unable to recover it. 00:29:29.876 [2024-07-15 16:21:05.584757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.876 [2024-07-15 16:21:05.584767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.876 qpair failed and we were unable to recover it. 00:29:29.876 [2024-07-15 16:21:05.585179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.876 [2024-07-15 16:21:05.585188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.876 qpair failed and we were unable to recover it. 
00:29:29.876 [2024-07-15 16:21:05.585459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.876 [2024-07-15 16:21:05.585468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.876 qpair failed and we were unable to recover it. 00:29:29.876 [2024-07-15 16:21:05.585804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.876 [2024-07-15 16:21:05.585813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.876 qpair failed and we were unable to recover it. 00:29:29.876 [2024-07-15 16:21:05.586223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.876 [2024-07-15 16:21:05.586232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.876 qpair failed and we were unable to recover it. 00:29:29.876 [2024-07-15 16:21:05.586628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.876 [2024-07-15 16:21:05.586636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.876 qpair failed and we were unable to recover it. 00:29:29.876 [2024-07-15 16:21:05.587055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.876 [2024-07-15 16:21:05.587064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.876 qpair failed and we were unable to recover it. 00:29:29.876 [2024-07-15 16:21:05.587374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.876 [2024-07-15 16:21:05.587383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.876 qpair failed and we were unable to recover it. 00:29:29.876 [2024-07-15 16:21:05.587756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.876 [2024-07-15 16:21:05.587765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.876 qpair failed and we were unable to recover it. 00:29:29.876 [2024-07-15 16:21:05.588162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.876 [2024-07-15 16:21:05.588171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.876 qpair failed and we were unable to recover it. 00:29:29.876 [2024-07-15 16:21:05.588510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.876 [2024-07-15 16:21:05.588519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.876 qpair failed and we were unable to recover it. 00:29:29.876 [2024-07-15 16:21:05.588927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.876 [2024-07-15 16:21:05.588936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.876 qpair failed and we were unable to recover it. 
00:29:29.876 [2024-07-15 16:21:05.589218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.876 [2024-07-15 16:21:05.589227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.876 qpair failed and we were unable to recover it. 00:29:29.876 [2024-07-15 16:21:05.589633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.876 [2024-07-15 16:21:05.589641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.876 qpair failed and we were unable to recover it. 00:29:29.876 [2024-07-15 16:21:05.590016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.876 [2024-07-15 16:21:05.590025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.876 qpair failed and we were unable to recover it. 00:29:29.876 [2024-07-15 16:21:05.590316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.876 [2024-07-15 16:21:05.590324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.876 qpair failed and we were unable to recover it. 00:29:29.876 [2024-07-15 16:21:05.590721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.876 [2024-07-15 16:21:05.590729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.876 qpair failed and we were unable to recover it. 00:29:29.876 [2024-07-15 16:21:05.591199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.876 [2024-07-15 16:21:05.591208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.876 qpair failed and we were unable to recover it. 00:29:29.876 [2024-07-15 16:21:05.591381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.876 [2024-07-15 16:21:05.591393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.876 qpair failed and we were unable to recover it. 00:29:29.876 [2024-07-15 16:21:05.591792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.876 [2024-07-15 16:21:05.591801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.876 qpair failed and we were unable to recover it. 00:29:29.876 [2024-07-15 16:21:05.592080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.876 [2024-07-15 16:21:05.592089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.876 qpair failed and we were unable to recover it. 00:29:29.876 [2024-07-15 16:21:05.592445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.876 [2024-07-15 16:21:05.592455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.876 qpair failed and we were unable to recover it. 
00:29:29.876 [2024-07-15 16:21:05.592755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.877 [2024-07-15 16:21:05.592764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.877 qpair failed and we were unable to recover it. 00:29:29.877 [2024-07-15 16:21:05.593065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.877 [2024-07-15 16:21:05.593073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.877 qpair failed and we were unable to recover it. 00:29:29.877 [2024-07-15 16:21:05.593476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.877 [2024-07-15 16:21:05.593485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.877 qpair failed and we were unable to recover it. 00:29:29.877 [2024-07-15 16:21:05.593878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.877 [2024-07-15 16:21:05.593887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.877 qpair failed and we were unable to recover it. 00:29:29.877 [2024-07-15 16:21:05.594203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.877 [2024-07-15 16:21:05.594211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.877 qpair failed and we were unable to recover it. 00:29:29.877 [2024-07-15 16:21:05.594621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.877 [2024-07-15 16:21:05.594629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.877 qpair failed and we were unable to recover it. 00:29:29.877 [2024-07-15 16:21:05.595016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.877 [2024-07-15 16:21:05.595025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.877 qpair failed and we were unable to recover it. 00:29:29.877 [2024-07-15 16:21:05.595301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.877 [2024-07-15 16:21:05.595310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.877 qpair failed and we were unable to recover it. 00:29:29.877 [2024-07-15 16:21:05.595591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.877 [2024-07-15 16:21:05.595599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.877 qpair failed and we were unable to recover it. 00:29:29.877 [2024-07-15 16:21:05.595949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.877 [2024-07-15 16:21:05.595959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.877 qpair failed and we were unable to recover it. 
00:29:29.877 [2024-07-15 16:21:05.596357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.877 [2024-07-15 16:21:05.596365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.877 qpair failed and we were unable to recover it. 00:29:29.877 [2024-07-15 16:21:05.596785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.877 [2024-07-15 16:21:05.596795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.877 qpair failed and we were unable to recover it. 00:29:29.877 [2024-07-15 16:21:05.597210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.877 [2024-07-15 16:21:05.597220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.877 qpair failed and we were unable to recover it. 00:29:29.877 [2024-07-15 16:21:05.597630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.877 [2024-07-15 16:21:05.597639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.877 qpair failed and we were unable to recover it. 00:29:29.877 [2024-07-15 16:21:05.598033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.877 [2024-07-15 16:21:05.598042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.877 qpair failed and we were unable to recover it. 00:29:29.877 [2024-07-15 16:21:05.598325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.877 [2024-07-15 16:21:05.598333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.877 qpair failed and we were unable to recover it. 00:29:29.877 [2024-07-15 16:21:05.598584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.877 [2024-07-15 16:21:05.598593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.877 qpair failed and we were unable to recover it. 00:29:29.877 [2024-07-15 16:21:05.598996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.877 [2024-07-15 16:21:05.599005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.877 qpair failed and we were unable to recover it. 00:29:29.877 [2024-07-15 16:21:05.599423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.877 [2024-07-15 16:21:05.599432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.877 qpair failed and we were unable to recover it. 00:29:29.877 [2024-07-15 16:21:05.599817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.877 [2024-07-15 16:21:05.599826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.877 qpair failed and we were unable to recover it. 
00:29:29.877 [2024-07-15 16:21:05.600231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.877 [2024-07-15 16:21:05.600239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.877 qpair failed and we were unable to recover it. 00:29:29.877 [2024-07-15 16:21:05.600648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.877 [2024-07-15 16:21:05.600656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.877 qpair failed and we were unable to recover it. 00:29:29.877 [2024-07-15 16:21:05.601050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.877 [2024-07-15 16:21:05.601059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.877 qpair failed and we were unable to recover it. 00:29:29.877 [2024-07-15 16:21:05.601362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.877 [2024-07-15 16:21:05.601370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.877 qpair failed and we were unable to recover it. 00:29:29.877 [2024-07-15 16:21:05.601756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.877 [2024-07-15 16:21:05.601764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.877 qpair failed and we were unable to recover it. 00:29:29.877 [2024-07-15 16:21:05.601956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.877 [2024-07-15 16:21:05.601965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.877 qpair failed and we were unable to recover it. 00:29:29.877 [2024-07-15 16:21:05.602224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.877 [2024-07-15 16:21:05.602233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.877 qpair failed and we were unable to recover it. 00:29:29.877 [2024-07-15 16:21:05.602643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.877 [2024-07-15 16:21:05.602651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.877 qpair failed and we were unable to recover it. 00:29:29.877 [2024-07-15 16:21:05.603053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.877 [2024-07-15 16:21:05.603061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.877 qpair failed and we were unable to recover it. 00:29:29.877 [2024-07-15 16:21:05.603454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.877 [2024-07-15 16:21:05.603462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.877 qpair failed and we were unable to recover it. 
[... the same three-line sequence — posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats for every connection attempt logged from 16:21:05.603855 through 16:21:05.680908 ...]
00:29:29.883 [2024-07-15 16:21:05.681348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.883 [2024-07-15 16:21:05.681377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.883 qpair failed and we were unable to recover it. 00:29:29.883 [2024-07-15 16:21:05.681778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.883 [2024-07-15 16:21:05.681787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.883 qpair failed and we were unable to recover it. 00:29:29.883 [2024-07-15 16:21:05.682198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.883 [2024-07-15 16:21:05.682206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.883 qpair failed and we were unable to recover it. 00:29:29.883 [2024-07-15 16:21:05.682494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.883 [2024-07-15 16:21:05.682503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.883 qpair failed and we were unable to recover it. 00:29:29.883 [2024-07-15 16:21:05.682926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.883 [2024-07-15 16:21:05.682935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.883 qpair failed and we were unable to recover it. 00:29:29.883 [2024-07-15 16:21:05.683339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.883 [2024-07-15 16:21:05.683348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.883 qpair failed and we were unable to recover it. 00:29:29.883 [2024-07-15 16:21:05.683809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.883 [2024-07-15 16:21:05.683817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.883 qpair failed and we were unable to recover it. 00:29:29.883 [2024-07-15 16:21:05.684227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.883 [2024-07-15 16:21:05.684236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.883 qpair failed and we were unable to recover it. 00:29:29.883 [2024-07-15 16:21:05.684456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.883 [2024-07-15 16:21:05.684466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.883 qpair failed and we were unable to recover it. 00:29:29.883 [2024-07-15 16:21:05.684833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.883 [2024-07-15 16:21:05.684842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.883 qpair failed and we were unable to recover it. 
00:29:29.883 [2024-07-15 16:21:05.685231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.883 [2024-07-15 16:21:05.685240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.883 qpair failed and we were unable to recover it. 00:29:29.883 [2024-07-15 16:21:05.685640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.883 [2024-07-15 16:21:05.685648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.883 qpair failed and we were unable to recover it. 00:29:29.883 [2024-07-15 16:21:05.685947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.883 [2024-07-15 16:21:05.685957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.883 qpair failed and we were unable to recover it. 00:29:29.883 [2024-07-15 16:21:05.686246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.883 [2024-07-15 16:21:05.686254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.883 qpair failed and we were unable to recover it. 00:29:29.883 [2024-07-15 16:21:05.686663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.883 [2024-07-15 16:21:05.686671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.883 qpair failed and we were unable to recover it. 00:29:29.883 [2024-07-15 16:21:05.687090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.883 [2024-07-15 16:21:05.687098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.883 qpair failed and we were unable to recover it. 00:29:29.883 [2024-07-15 16:21:05.687406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.883 [2024-07-15 16:21:05.687415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.883 qpair failed and we were unable to recover it. 00:29:29.883 [2024-07-15 16:21:05.687808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.883 [2024-07-15 16:21:05.687816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.883 qpair failed and we were unable to recover it. 00:29:29.883 [2024-07-15 16:21:05.688212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.883 [2024-07-15 16:21:05.688221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.883 qpair failed and we were unable to recover it. 00:29:29.883 [2024-07-15 16:21:05.688611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.883 [2024-07-15 16:21:05.688619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.883 qpair failed and we were unable to recover it. 
00:29:29.883 [2024-07-15 16:21:05.688949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.883 [2024-07-15 16:21:05.688957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.883 qpair failed and we were unable to recover it. 00:29:29.883 [2024-07-15 16:21:05.689346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.883 [2024-07-15 16:21:05.689355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.883 qpair failed and we were unable to recover it. 00:29:29.883 [2024-07-15 16:21:05.689750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.883 [2024-07-15 16:21:05.689759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.883 qpair failed and we were unable to recover it. 00:29:29.883 [2024-07-15 16:21:05.690163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.883 [2024-07-15 16:21:05.690171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.883 qpair failed and we were unable to recover it. 00:29:29.883 [2024-07-15 16:21:05.690534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.883 [2024-07-15 16:21:05.690543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.883 qpair failed and we were unable to recover it. 00:29:29.883 [2024-07-15 16:21:05.690936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.883 [2024-07-15 16:21:05.690944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.883 qpair failed and we were unable to recover it. 00:29:29.883 [2024-07-15 16:21:05.691350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.883 [2024-07-15 16:21:05.691358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.883 qpair failed and we were unable to recover it. 00:29:29.883 [2024-07-15 16:21:05.691770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.883 [2024-07-15 16:21:05.691778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.883 qpair failed and we were unable to recover it. 00:29:29.883 [2024-07-15 16:21:05.692175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.883 [2024-07-15 16:21:05.692183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.883 qpair failed and we were unable to recover it. 00:29:29.883 [2024-07-15 16:21:05.692626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.883 [2024-07-15 16:21:05.692634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.883 qpair failed and we were unable to recover it. 
00:29:29.883 [2024-07-15 16:21:05.693017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.883 [2024-07-15 16:21:05.693025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.883 qpair failed and we were unable to recover it. 00:29:29.884 [2024-07-15 16:21:05.693435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.884 [2024-07-15 16:21:05.693444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.884 qpair failed and we were unable to recover it. 00:29:29.884 [2024-07-15 16:21:05.693835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.884 [2024-07-15 16:21:05.693844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.884 qpair failed and we were unable to recover it. 00:29:29.884 [2024-07-15 16:21:05.694236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.884 [2024-07-15 16:21:05.694245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.884 qpair failed and we were unable to recover it. 00:29:29.884 [2024-07-15 16:21:05.694661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.884 [2024-07-15 16:21:05.694670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.884 qpair failed and we were unable to recover it. 00:29:29.884 [2024-07-15 16:21:05.695091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.884 [2024-07-15 16:21:05.695100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:29.884 qpair failed and we were unable to recover it. 00:29:30.165 [2024-07-15 16:21:05.695402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.165 [2024-07-15 16:21:05.695413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.165 qpair failed and we were unable to recover it. 00:29:30.165 [2024-07-15 16:21:05.695807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.165 [2024-07-15 16:21:05.695815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.165 qpair failed and we were unable to recover it. 00:29:30.165 [2024-07-15 16:21:05.696210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.165 [2024-07-15 16:21:05.696219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.165 qpair failed and we were unable to recover it. 00:29:30.165 [2024-07-15 16:21:05.696640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.165 [2024-07-15 16:21:05.696649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.165 qpair failed and we were unable to recover it. 
00:29:30.165 [2024-07-15 16:21:05.697044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.165 [2024-07-15 16:21:05.697052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.165 qpair failed and we were unable to recover it. 00:29:30.165 [2024-07-15 16:21:05.697336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.165 [2024-07-15 16:21:05.697344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.165 qpair failed and we were unable to recover it. 00:29:30.165 [2024-07-15 16:21:05.697742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.165 [2024-07-15 16:21:05.697751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.165 qpair failed and we were unable to recover it. 00:29:30.165 [2024-07-15 16:21:05.698098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.165 [2024-07-15 16:21:05.698106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.165 qpair failed and we were unable to recover it. 00:29:30.165 [2024-07-15 16:21:05.698417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.165 [2024-07-15 16:21:05.698426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.165 qpair failed and we were unable to recover it. 00:29:30.165 [2024-07-15 16:21:05.698820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.165 [2024-07-15 16:21:05.698828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.165 qpair failed and we were unable to recover it. 00:29:30.165 [2024-07-15 16:21:05.699219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.165 [2024-07-15 16:21:05.699227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.165 qpair failed and we were unable to recover it. 00:29:30.165 [2024-07-15 16:21:05.699644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.165 [2024-07-15 16:21:05.699652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.165 qpair failed and we were unable to recover it. 00:29:30.165 [2024-07-15 16:21:05.700052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.165 [2024-07-15 16:21:05.700060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.165 qpair failed and we were unable to recover it. 00:29:30.165 [2024-07-15 16:21:05.700454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.165 [2024-07-15 16:21:05.700463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.165 qpair failed and we were unable to recover it. 
00:29:30.165 [2024-07-15 16:21:05.700858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.165 [2024-07-15 16:21:05.700867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.165 qpair failed and we were unable to recover it. 00:29:30.165 [2024-07-15 16:21:05.701217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.165 [2024-07-15 16:21:05.701226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.165 qpair failed and we were unable to recover it. 00:29:30.165 [2024-07-15 16:21:05.701629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.165 [2024-07-15 16:21:05.701637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.165 qpair failed and we were unable to recover it. 00:29:30.165 [2024-07-15 16:21:05.702042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.165 [2024-07-15 16:21:05.702050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.165 qpair failed and we were unable to recover it. 00:29:30.165 [2024-07-15 16:21:05.702425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.165 [2024-07-15 16:21:05.702434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.165 qpair failed and we were unable to recover it. 00:29:30.166 [2024-07-15 16:21:05.702733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.166 [2024-07-15 16:21:05.702742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.166 qpair failed and we were unable to recover it. 00:29:30.166 [2024-07-15 16:21:05.703133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.166 [2024-07-15 16:21:05.703141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.166 qpair failed and we were unable to recover it. 00:29:30.166 [2024-07-15 16:21:05.703346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.166 [2024-07-15 16:21:05.703355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.166 qpair failed and we were unable to recover it. 00:29:30.166 [2024-07-15 16:21:05.703747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.166 [2024-07-15 16:21:05.703755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.166 qpair failed and we were unable to recover it. 00:29:30.166 [2024-07-15 16:21:05.704169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.166 [2024-07-15 16:21:05.704177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.166 qpair failed and we were unable to recover it. 
00:29:30.166 [2024-07-15 16:21:05.704565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.166 [2024-07-15 16:21:05.704573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.166 qpair failed and we were unable to recover it. 00:29:30.166 [2024-07-15 16:21:05.704970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.166 [2024-07-15 16:21:05.704978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.166 qpair failed and we were unable to recover it. 00:29:30.166 [2024-07-15 16:21:05.705393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.166 [2024-07-15 16:21:05.705403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.166 qpair failed and we were unable to recover it. 00:29:30.166 [2024-07-15 16:21:05.705750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.166 [2024-07-15 16:21:05.705758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.166 qpair failed and we were unable to recover it. 00:29:30.166 [2024-07-15 16:21:05.706117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.166 [2024-07-15 16:21:05.706131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.166 qpair failed and we were unable to recover it. 00:29:30.166 [2024-07-15 16:21:05.706520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.166 [2024-07-15 16:21:05.706530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.166 qpair failed and we were unable to recover it. 00:29:30.166 [2024-07-15 16:21:05.706919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.166 [2024-07-15 16:21:05.706928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.166 qpair failed and we were unable to recover it. 00:29:30.166 [2024-07-15 16:21:05.707420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.166 [2024-07-15 16:21:05.707449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.166 qpair failed and we were unable to recover it. 00:29:30.166 [2024-07-15 16:21:05.707891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.166 [2024-07-15 16:21:05.707900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.166 qpair failed and we were unable to recover it. 00:29:30.166 [2024-07-15 16:21:05.708455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.166 [2024-07-15 16:21:05.708484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.166 qpair failed and we were unable to recover it. 
00:29:30.166 [2024-07-15 16:21:05.708894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.166 [2024-07-15 16:21:05.708904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.166 qpair failed and we were unable to recover it. 00:29:30.166 [2024-07-15 16:21:05.709403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.166 [2024-07-15 16:21:05.709432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.166 qpair failed and we were unable to recover it. 00:29:30.166 [2024-07-15 16:21:05.709832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.166 [2024-07-15 16:21:05.709842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.166 qpair failed and we were unable to recover it. 00:29:30.166 [2024-07-15 16:21:05.710340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.166 [2024-07-15 16:21:05.710369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.166 qpair failed and we were unable to recover it. 00:29:30.166 [2024-07-15 16:21:05.710668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.166 [2024-07-15 16:21:05.710679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.166 qpair failed and we were unable to recover it. 00:29:30.166 [2024-07-15 16:21:05.711066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.166 [2024-07-15 16:21:05.711076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.166 qpair failed and we were unable to recover it. 00:29:30.166 [2024-07-15 16:21:05.711490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.166 [2024-07-15 16:21:05.711498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.166 qpair failed and we were unable to recover it. 00:29:30.166 [2024-07-15 16:21:05.711874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.166 [2024-07-15 16:21:05.711882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.166 qpair failed and we were unable to recover it. 00:29:30.166 [2024-07-15 16:21:05.712187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.166 [2024-07-15 16:21:05.712204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.166 qpair failed and we were unable to recover it. 00:29:30.166 [2024-07-15 16:21:05.712627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.166 [2024-07-15 16:21:05.712635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.166 qpair failed and we were unable to recover it. 
00:29:30.166 [2024-07-15 16:21:05.712925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.166 [2024-07-15 16:21:05.712933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.166 qpair failed and we were unable to recover it. 00:29:30.166 [2024-07-15 16:21:05.713396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.166 [2024-07-15 16:21:05.713404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.166 qpair failed and we were unable to recover it. 00:29:30.166 [2024-07-15 16:21:05.713805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.166 [2024-07-15 16:21:05.713813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.166 qpair failed and we were unable to recover it. 00:29:30.166 [2024-07-15 16:21:05.714230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.166 [2024-07-15 16:21:05.714238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.166 qpair failed and we were unable to recover it. 00:29:30.166 [2024-07-15 16:21:05.714630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.166 [2024-07-15 16:21:05.714638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.166 qpair failed and we were unable to recover it. 00:29:30.166 [2024-07-15 16:21:05.714935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.166 [2024-07-15 16:21:05.714943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.166 qpair failed and we were unable to recover it. 00:29:30.166 [2024-07-15 16:21:05.715234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.166 [2024-07-15 16:21:05.715242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.166 qpair failed and we were unable to recover it. 00:29:30.166 [2024-07-15 16:21:05.715520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.166 [2024-07-15 16:21:05.715527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.166 qpair failed and we were unable to recover it. 00:29:30.166 [2024-07-15 16:21:05.715974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.166 [2024-07-15 16:21:05.715983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.166 qpair failed and we were unable to recover it. 00:29:30.166 [2024-07-15 16:21:05.716285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.166 [2024-07-15 16:21:05.716294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.166 qpair failed and we were unable to recover it. 
00:29:30.166 [2024-07-15 16:21:05.716499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.166 [2024-07-15 16:21:05.716509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.166 qpair failed and we were unable to recover it. 00:29:30.166 [2024-07-15 16:21:05.716915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.166 [2024-07-15 16:21:05.716923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.166 qpair failed and we were unable to recover it. 00:29:30.166 [2024-07-15 16:21:05.717334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.166 [2024-07-15 16:21:05.717343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.166 qpair failed and we were unable to recover it. 00:29:30.166 [2024-07-15 16:21:05.717742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.166 [2024-07-15 16:21:05.717751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.166 qpair failed and we were unable to recover it. 00:29:30.166 [2024-07-15 16:21:05.718138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.166 [2024-07-15 16:21:05.718147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.167 qpair failed and we were unable to recover it. 00:29:30.167 [2024-07-15 16:21:05.718513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.167 [2024-07-15 16:21:05.718522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.167 qpair failed and we were unable to recover it. 00:29:30.167 [2024-07-15 16:21:05.718910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.167 [2024-07-15 16:21:05.718918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.167 qpair failed and we were unable to recover it. 00:29:30.167 [2024-07-15 16:21:05.719410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.167 [2024-07-15 16:21:05.719418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.167 qpair failed and we were unable to recover it. 00:29:30.167 [2024-07-15 16:21:05.719798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.167 [2024-07-15 16:21:05.719806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.167 qpair failed and we were unable to recover it. 00:29:30.167 [2024-07-15 16:21:05.719958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.167 [2024-07-15 16:21:05.719968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.167 qpair failed and we were unable to recover it. 
00:29:30.167 [2024-07-15 16:21:05.720402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.167 [2024-07-15 16:21:05.720410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.167 qpair failed and we were unable to recover it. 00:29:30.167 [2024-07-15 16:21:05.720840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.167 [2024-07-15 16:21:05.720848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.167 qpair failed and we were unable to recover it. 00:29:30.167 [2024-07-15 16:21:05.721250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.167 [2024-07-15 16:21:05.721260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.167 qpair failed and we were unable to recover it. 00:29:30.167 [2024-07-15 16:21:05.721661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.167 [2024-07-15 16:21:05.721670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.167 qpair failed and we were unable to recover it. 00:29:30.167 [2024-07-15 16:21:05.722077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.167 [2024-07-15 16:21:05.722086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.167 qpair failed and we were unable to recover it. 00:29:30.167 [2024-07-15 16:21:05.722277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.167 [2024-07-15 16:21:05.722286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.167 qpair failed and we were unable to recover it. 00:29:30.167 [2024-07-15 16:21:05.722712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.167 [2024-07-15 16:21:05.722720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.167 qpair failed and we were unable to recover it. 00:29:30.167 [2024-07-15 16:21:05.723135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.167 [2024-07-15 16:21:05.723142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.167 qpair failed and we were unable to recover it. 00:29:30.167 [2024-07-15 16:21:05.723430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.167 [2024-07-15 16:21:05.723439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.167 qpair failed and we were unable to recover it. 00:29:30.167 [2024-07-15 16:21:05.723833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.167 [2024-07-15 16:21:05.723841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.167 qpair failed and we were unable to recover it. 
00:29:30.167 [2024-07-15 16:21:05.724249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.167 [2024-07-15 16:21:05.724258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.167 qpair failed and we were unable to recover it. 00:29:30.167 [2024-07-15 16:21:05.724628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.167 [2024-07-15 16:21:05.724636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.167 qpair failed and we were unable to recover it. 00:29:30.167 [2024-07-15 16:21:05.725029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.167 [2024-07-15 16:21:05.725038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.167 qpair failed and we were unable to recover it. 00:29:30.167 [2024-07-15 16:21:05.725421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.167 [2024-07-15 16:21:05.725430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.167 qpair failed and we were unable to recover it. 00:29:30.167 [2024-07-15 16:21:05.725821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.167 [2024-07-15 16:21:05.725830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.167 qpair failed and we were unable to recover it. 00:29:30.167 [2024-07-15 16:21:05.726248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.167 [2024-07-15 16:21:05.726256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.167 qpair failed and we were unable to recover it. 00:29:30.167 [2024-07-15 16:21:05.726656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.167 [2024-07-15 16:21:05.726665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.167 qpair failed and we were unable to recover it. 00:29:30.167 [2024-07-15 16:21:05.726933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.167 [2024-07-15 16:21:05.726942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.167 qpair failed and we were unable to recover it. 00:29:30.167 [2024-07-15 16:21:05.727360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.167 [2024-07-15 16:21:05.727369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.167 qpair failed and we were unable to recover it. 00:29:30.167 [2024-07-15 16:21:05.727782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.167 [2024-07-15 16:21:05.727791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.167 qpair failed and we were unable to recover it. 
00:29:30.167 [2024-07-15 16:21:05.728110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.167 [2024-07-15 16:21:05.728118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.167 qpair failed and we were unable to recover it. 00:29:30.167 [2024-07-15 16:21:05.728516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.167 [2024-07-15 16:21:05.728524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.167 qpair failed and we were unable to recover it. 00:29:30.167 [2024-07-15 16:21:05.728932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.167 [2024-07-15 16:21:05.728940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.167 qpair failed and we were unable to recover it. 00:29:30.167 [2024-07-15 16:21:05.729345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.167 [2024-07-15 16:21:05.729375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.167 qpair failed and we were unable to recover it. 00:29:30.167 [2024-07-15 16:21:05.729779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.167 [2024-07-15 16:21:05.729789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.167 qpair failed and we were unable to recover it. 00:29:30.167 [2024-07-15 16:21:05.730189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.167 [2024-07-15 16:21:05.730198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.167 qpair failed and we were unable to recover it. 00:29:30.167 [2024-07-15 16:21:05.730701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.167 [2024-07-15 16:21:05.730710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.167 qpair failed and we were unable to recover it. 00:29:30.167 [2024-07-15 16:21:05.731101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.167 [2024-07-15 16:21:05.731109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.167 qpair failed and we were unable to recover it. 00:29:30.167 [2024-07-15 16:21:05.731430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.167 [2024-07-15 16:21:05.731439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.167 qpair failed and we were unable to recover it. 00:29:30.167 [2024-07-15 16:21:05.731742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.167 [2024-07-15 16:21:05.731750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.167 qpair failed and we were unable to recover it. 
00:29:30.167 [2024-07-15 16:21:05.732172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.167 [2024-07-15 16:21:05.732180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.167 qpair failed and we were unable to recover it. 00:29:30.167 [2024-07-15 16:21:05.732589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.167 [2024-07-15 16:21:05.732598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.167 qpair failed and we were unable to recover it. 00:29:30.167 [2024-07-15 16:21:05.732987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.167 [2024-07-15 16:21:05.732995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.167 qpair failed and we were unable to recover it. 00:29:30.167 [2024-07-15 16:21:05.733368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.167 [2024-07-15 16:21:05.733377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.168 qpair failed and we were unable to recover it. 00:29:30.168 [2024-07-15 16:21:05.733677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.168 [2024-07-15 16:21:05.733685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.168 qpair failed and we were unable to recover it. 00:29:30.168 [2024-07-15 16:21:05.733960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.168 [2024-07-15 16:21:05.733967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.168 qpair failed and we were unable to recover it. 00:29:30.168 [2024-07-15 16:21:05.734379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.168 [2024-07-15 16:21:05.734387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.168 qpair failed and we were unable to recover it. 00:29:30.168 [2024-07-15 16:21:05.734677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.168 [2024-07-15 16:21:05.734685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.168 qpair failed and we were unable to recover it. 00:29:30.168 [2024-07-15 16:21:05.735086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.168 [2024-07-15 16:21:05.735095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.168 qpair failed and we were unable to recover it. 00:29:30.168 [2024-07-15 16:21:05.735415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.168 [2024-07-15 16:21:05.735425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.168 qpair failed and we were unable to recover it. 
00:29:30.168 [2024-07-15 16:21:05.735843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.168 [2024-07-15 16:21:05.735852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.168 qpair failed and we were unable to recover it. 00:29:30.168 [2024-07-15 16:21:05.736263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.168 [2024-07-15 16:21:05.736271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.168 qpair failed and we were unable to recover it. 00:29:30.168 [2024-07-15 16:21:05.736739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.168 [2024-07-15 16:21:05.736748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.168 qpair failed and we were unable to recover it. 00:29:30.168 [2024-07-15 16:21:05.737129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.168 [2024-07-15 16:21:05.737138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.168 qpair failed and we were unable to recover it. 00:29:30.168 [2024-07-15 16:21:05.737510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.168 [2024-07-15 16:21:05.737518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.168 qpair failed and we were unable to recover it. 00:29:30.168 [2024-07-15 16:21:05.737903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.168 [2024-07-15 16:21:05.737912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.168 qpair failed and we were unable to recover it. 00:29:30.168 [2024-07-15 16:21:05.738358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.168 [2024-07-15 16:21:05.738387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.168 qpair failed and we were unable to recover it. 00:29:30.168 [2024-07-15 16:21:05.738674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.168 [2024-07-15 16:21:05.738684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.168 qpair failed and we were unable to recover it. 00:29:30.168 [2024-07-15 16:21:05.738960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.168 [2024-07-15 16:21:05.738969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.168 qpair failed and we were unable to recover it. 00:29:30.168 [2024-07-15 16:21:05.739364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.168 [2024-07-15 16:21:05.739373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.168 qpair failed and we were unable to recover it. 
00:29:30.168 [2024-07-15 16:21:05.739766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.168 [2024-07-15 16:21:05.739774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.168 qpair failed and we were unable to recover it. 00:29:30.168 [2024-07-15 16:21:05.740168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.168 [2024-07-15 16:21:05.740176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.168 qpair failed and we were unable to recover it. 00:29:30.168 [2024-07-15 16:21:05.740330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.168 [2024-07-15 16:21:05.740337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.168 qpair failed and we were unable to recover it. 00:29:30.168 [2024-07-15 16:21:05.740741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.168 [2024-07-15 16:21:05.740750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.168 qpair failed and we were unable to recover it. 00:29:30.168 [2024-07-15 16:21:05.741220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.168 [2024-07-15 16:21:05.741228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.168 qpair failed and we were unable to recover it. 00:29:30.168 [2024-07-15 16:21:05.741534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.168 [2024-07-15 16:21:05.741543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.168 qpair failed and we were unable to recover it. 00:29:30.168 [2024-07-15 16:21:05.741885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.168 [2024-07-15 16:21:05.741893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.168 qpair failed and we were unable to recover it. 00:29:30.168 [2024-07-15 16:21:05.742187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.168 [2024-07-15 16:21:05.742196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.168 qpair failed and we were unable to recover it. 00:29:30.168 [2024-07-15 16:21:05.742568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.168 [2024-07-15 16:21:05.742576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.168 qpair failed and we were unable to recover it. 00:29:30.168 [2024-07-15 16:21:05.742995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.168 [2024-07-15 16:21:05.743003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.168 qpair failed and we were unable to recover it. 
00:29:30.168 [2024-07-15 16:21:05.743418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.168 [2024-07-15 16:21:05.743427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.168 qpair failed and we were unable to recover it. 00:29:30.168 [2024-07-15 16:21:05.743554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.168 [2024-07-15 16:21:05.743562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.168 qpair failed and we were unable to recover it. 00:29:30.168 [2024-07-15 16:21:05.743945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.168 [2024-07-15 16:21:05.743954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.168 qpair failed and we were unable to recover it. 00:29:30.168 [2024-07-15 16:21:05.744158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.168 [2024-07-15 16:21:05.744169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.168 qpair failed and we were unable to recover it. 00:29:30.168 [2024-07-15 16:21:05.744591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.168 [2024-07-15 16:21:05.744599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.168 qpair failed and we were unable to recover it. 00:29:30.168 [2024-07-15 16:21:05.744996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.168 [2024-07-15 16:21:05.745005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.168 qpair failed and we were unable to recover it. 00:29:30.168 [2024-07-15 16:21:05.745528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.168 [2024-07-15 16:21:05.745557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.168 qpair failed and we were unable to recover it. 00:29:30.168 [2024-07-15 16:21:05.746013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.168 [2024-07-15 16:21:05.746023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.168 qpair failed and we were unable to recover it. 00:29:30.168 [2024-07-15 16:21:05.746422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.168 [2024-07-15 16:21:05.746430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.168 qpair failed and we were unable to recover it. 00:29:30.168 [2024-07-15 16:21:05.746839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.168 [2024-07-15 16:21:05.746852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.168 qpair failed and we were unable to recover it. 
00:29:30.168 [2024-07-15 16:21:05.747341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.168 [2024-07-15 16:21:05.747371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.168 qpair failed and we were unable to recover it. 00:29:30.168 [2024-07-15 16:21:05.747764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.168 [2024-07-15 16:21:05.747773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.168 qpair failed and we were unable to recover it. 00:29:30.168 [2024-07-15 16:21:05.748164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.168 [2024-07-15 16:21:05.748172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.168 qpair failed and we were unable to recover it. 00:29:30.168 [2024-07-15 16:21:05.748600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.169 [2024-07-15 16:21:05.748608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.169 qpair failed and we were unable to recover it. 00:29:30.169 [2024-07-15 16:21:05.748983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.169 [2024-07-15 16:21:05.748991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.169 qpair failed and we were unable to recover it. 00:29:30.169 [2024-07-15 16:21:05.749395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.169 [2024-07-15 16:21:05.749403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.169 qpair failed and we were unable to recover it. 00:29:30.169 [2024-07-15 16:21:05.749793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.169 [2024-07-15 16:21:05.749801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.169 qpair failed and we were unable to recover it. 00:29:30.169 [2024-07-15 16:21:05.750186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.169 [2024-07-15 16:21:05.750195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.169 qpair failed and we were unable to recover it. 00:29:30.169 [2024-07-15 16:21:05.750473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.169 [2024-07-15 16:21:05.750481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.169 qpair failed and we were unable to recover it. 00:29:30.169 [2024-07-15 16:21:05.750895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.169 [2024-07-15 16:21:05.750904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.169 qpair failed and we were unable to recover it. 
00:29:30.169 [2024-07-15 16:21:05.751291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.169 [2024-07-15 16:21:05.751300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.169 qpair failed and we were unable to recover it. 00:29:30.169 [2024-07-15 16:21:05.751775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.169 [2024-07-15 16:21:05.751783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.169 qpair failed and we were unable to recover it. 00:29:30.169 [2024-07-15 16:21:05.752033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.169 [2024-07-15 16:21:05.752040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.169 qpair failed and we were unable to recover it. 00:29:30.169 [2024-07-15 16:21:05.752327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.169 [2024-07-15 16:21:05.752336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.169 qpair failed and we were unable to recover it. 00:29:30.169 [2024-07-15 16:21:05.752598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.169 [2024-07-15 16:21:05.752606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.169 qpair failed and we were unable to recover it. 00:29:30.169 [2024-07-15 16:21:05.752991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.169 [2024-07-15 16:21:05.752998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.169 qpair failed and we were unable to recover it. 00:29:30.169 [2024-07-15 16:21:05.753393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.169 [2024-07-15 16:21:05.753401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.169 qpair failed and we were unable to recover it. 00:29:30.169 [2024-07-15 16:21:05.753808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.169 [2024-07-15 16:21:05.753816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.169 qpair failed and we were unable to recover it. 00:29:30.169 [2024-07-15 16:21:05.754268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.169 [2024-07-15 16:21:05.754276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.169 qpair failed and we were unable to recover it. 00:29:30.169 [2024-07-15 16:21:05.754620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.169 [2024-07-15 16:21:05.754628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.169 qpair failed and we were unable to recover it. 
00:29:30.169 [2024-07-15 16:21:05.755028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.169 [2024-07-15 16:21:05.755037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.169 qpair failed and we were unable to recover it. 00:29:30.169 [2024-07-15 16:21:05.755318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.169 [2024-07-15 16:21:05.755327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.169 qpair failed and we were unable to recover it. 00:29:30.169 [2024-07-15 16:21:05.755723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.169 [2024-07-15 16:21:05.755732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.169 qpair failed and we were unable to recover it. 00:29:30.169 [2024-07-15 16:21:05.756067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.169 [2024-07-15 16:21:05.756075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.169 qpair failed and we were unable to recover it. 00:29:30.169 [2024-07-15 16:21:05.756521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.169 [2024-07-15 16:21:05.756529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.169 qpair failed and we were unable to recover it. 00:29:30.169 [2024-07-15 16:21:05.756911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.169 [2024-07-15 16:21:05.756920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.169 qpair failed and we were unable to recover it. 00:29:30.169 [2024-07-15 16:21:05.757212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.169 [2024-07-15 16:21:05.757220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.169 qpair failed and we were unable to recover it. 00:29:30.169 [2024-07-15 16:21:05.757628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.169 [2024-07-15 16:21:05.757635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.169 qpair failed and we were unable to recover it. 00:29:30.169 [2024-07-15 16:21:05.757929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.169 [2024-07-15 16:21:05.757938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.169 qpair failed and we were unable to recover it. 00:29:30.169 [2024-07-15 16:21:05.758369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.169 [2024-07-15 16:21:05.758377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.169 qpair failed and we were unable to recover it. 
00:29:30.169 [2024-07-15 16:21:05.758784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.169 [2024-07-15 16:21:05.758793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.169 qpair failed and we were unable to recover it. 00:29:30.169 [2024-07-15 16:21:05.759090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.169 [2024-07-15 16:21:05.759099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.169 qpair failed and we were unable to recover it. 00:29:30.169 [2024-07-15 16:21:05.759491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.169 [2024-07-15 16:21:05.759499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.169 qpair failed and we were unable to recover it. 00:29:30.169 [2024-07-15 16:21:05.759908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.169 [2024-07-15 16:21:05.759916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.169 qpair failed and we were unable to recover it. 00:29:30.169 [2024-07-15 16:21:05.760343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.169 [2024-07-15 16:21:05.760372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.169 qpair failed and we were unable to recover it. 00:29:30.169 [2024-07-15 16:21:05.760779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.169 [2024-07-15 16:21:05.760789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.169 qpair failed and we were unable to recover it. 00:29:30.169 [2024-07-15 16:21:05.761179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.169 [2024-07-15 16:21:05.761188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.170 qpair failed and we were unable to recover it. 00:29:30.170 [2024-07-15 16:21:05.761588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.170 [2024-07-15 16:21:05.761596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.170 qpair failed and we were unable to recover it. 00:29:30.170 [2024-07-15 16:21:05.761987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.170 [2024-07-15 16:21:05.761995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.170 qpair failed and we were unable to recover it. 00:29:30.170 [2024-07-15 16:21:05.762302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.170 [2024-07-15 16:21:05.762314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.170 qpair failed and we were unable to recover it. 
00:29:30.170 [2024-07-15 16:21:05.762702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.170 [2024-07-15 16:21:05.762710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.170 qpair failed and we were unable to recover it. 00:29:30.170 [2024-07-15 16:21:05.763118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.170 [2024-07-15 16:21:05.763130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.170 qpair failed and we were unable to recover it. 00:29:30.170 [2024-07-15 16:21:05.763330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.170 [2024-07-15 16:21:05.763338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.170 qpair failed and we were unable to recover it. 00:29:30.170 [2024-07-15 16:21:05.763746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.170 [2024-07-15 16:21:05.763753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.170 qpair failed and we were unable to recover it. 00:29:30.170 [2024-07-15 16:21:05.764165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.170 [2024-07-15 16:21:05.764173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.170 qpair failed and we were unable to recover it. 00:29:30.170 [2024-07-15 16:21:05.764587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.170 [2024-07-15 16:21:05.764595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.170 qpair failed and we were unable to recover it. 00:29:30.170 [2024-07-15 16:21:05.764996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.170 [2024-07-15 16:21:05.765003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.170 qpair failed and we were unable to recover it. 00:29:30.170 [2024-07-15 16:21:05.765382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.170 [2024-07-15 16:21:05.765391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.170 qpair failed and we were unable to recover it. 00:29:30.170 [2024-07-15 16:21:05.765833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.170 [2024-07-15 16:21:05.765842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.170 qpair failed and we were unable to recover it. 00:29:30.170 [2024-07-15 16:21:05.766390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.170 [2024-07-15 16:21:05.766420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.170 qpair failed and we were unable to recover it. 
00:29:30.170 [2024-07-15 16:21:05.766828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.170 [2024-07-15 16:21:05.766838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.170 qpair failed and we were unable to recover it. 00:29:30.170 [2024-07-15 16:21:05.767159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.170 [2024-07-15 16:21:05.767168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.170 qpair failed and we were unable to recover it. 00:29:30.170 [2024-07-15 16:21:05.767496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.170 [2024-07-15 16:21:05.767505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.170 qpair failed and we were unable to recover it. 00:29:30.170 [2024-07-15 16:21:05.767925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.170 [2024-07-15 16:21:05.767934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.170 qpair failed and we were unable to recover it. 00:29:30.170 [2024-07-15 16:21:05.768335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.170 [2024-07-15 16:21:05.768343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.170 qpair failed and we were unable to recover it. 00:29:30.170 [2024-07-15 16:21:05.768611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.170 [2024-07-15 16:21:05.768619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.170 qpair failed and we were unable to recover it. 00:29:30.170 [2024-07-15 16:21:05.769012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.170 [2024-07-15 16:21:05.769020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.170 qpair failed and we were unable to recover it. 00:29:30.170 [2024-07-15 16:21:05.769481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.170 [2024-07-15 16:21:05.769489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.170 qpair failed and we were unable to recover it. 00:29:30.170 [2024-07-15 16:21:05.769755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.170 [2024-07-15 16:21:05.769762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.170 qpair failed and we were unable to recover it. 00:29:30.170 [2024-07-15 16:21:05.770144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.170 [2024-07-15 16:21:05.770152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.170 qpair failed and we were unable to recover it. 
00:29:30.170 [2024-07-15 16:21:05.770459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.170 [2024-07-15 16:21:05.770468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.170 qpair failed and we were unable to recover it. 00:29:30.170 [2024-07-15 16:21:05.770884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.170 [2024-07-15 16:21:05.770892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.170 qpair failed and we were unable to recover it. 00:29:30.170 [2024-07-15 16:21:05.771279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.170 [2024-07-15 16:21:05.771288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.170 qpair failed and we were unable to recover it. 00:29:30.170 [2024-07-15 16:21:05.771680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.170 [2024-07-15 16:21:05.771689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.170 qpair failed and we were unable to recover it. 00:29:30.170 [2024-07-15 16:21:05.772077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.170 [2024-07-15 16:21:05.772085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.170 qpair failed and we were unable to recover it. 00:29:30.170 [2024-07-15 16:21:05.772471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.170 [2024-07-15 16:21:05.772479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.170 qpair failed and we were unable to recover it. 00:29:30.170 [2024-07-15 16:21:05.772774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.170 [2024-07-15 16:21:05.772783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.170 qpair failed and we were unable to recover it. 00:29:30.170 [2024-07-15 16:21:05.773188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.170 [2024-07-15 16:21:05.773196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.170 qpair failed and we were unable to recover it. 00:29:30.170 [2024-07-15 16:21:05.773595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.170 [2024-07-15 16:21:05.773602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.170 qpair failed and we were unable to recover it. 00:29:30.170 [2024-07-15 16:21:05.773938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.170 [2024-07-15 16:21:05.773947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.170 qpair failed and we were unable to recover it. 
00:29:30.170 [2024-07-15 16:21:05.774254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.170 [2024-07-15 16:21:05.774264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.170 qpair failed and we were unable to recover it. 00:29:30.170 [2024-07-15 16:21:05.774563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.170 [2024-07-15 16:21:05.774571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.170 qpair failed and we were unable to recover it. 00:29:30.170 [2024-07-15 16:21:05.774951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.170 [2024-07-15 16:21:05.774959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.170 qpair failed and we were unable to recover it. 00:29:30.170 [2024-07-15 16:21:05.775261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.170 [2024-07-15 16:21:05.775270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.170 qpair failed and we were unable to recover it. 00:29:30.170 [2024-07-15 16:21:05.775710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.170 [2024-07-15 16:21:05.775718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.170 qpair failed and we were unable to recover it. 00:29:30.170 [2024-07-15 16:21:05.776111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.170 [2024-07-15 16:21:05.776119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.170 qpair failed and we were unable to recover it. 00:29:30.170 [2024-07-15 16:21:05.776558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.170 [2024-07-15 16:21:05.776566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.170 qpair failed and we were unable to recover it. 00:29:30.170 [2024-07-15 16:21:05.776935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.171 [2024-07-15 16:21:05.776943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.171 qpair failed and we were unable to recover it. 00:29:30.171 [2024-07-15 16:21:05.777363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.171 [2024-07-15 16:21:05.777392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.171 qpair failed and we were unable to recover it. 00:29:30.171 [2024-07-15 16:21:05.777794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.171 [2024-07-15 16:21:05.777808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.171 qpair failed and we were unable to recover it. 
00:29:30.171 [2024-07-15 16:21:05.778209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.171 [2024-07-15 16:21:05.778219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.171 qpair failed and we were unable to recover it. 00:29:30.171 [2024-07-15 16:21:05.778596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.171 [2024-07-15 16:21:05.778605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.171 qpair failed and we were unable to recover it. 00:29:30.171 [2024-07-15 16:21:05.779006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.171 [2024-07-15 16:21:05.779014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.171 qpair failed and we were unable to recover it. 00:29:30.171 [2024-07-15 16:21:05.779525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.171 [2024-07-15 16:21:05.779534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.171 qpair failed and we were unable to recover it. 00:29:30.171 [2024-07-15 16:21:05.779966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.171 [2024-07-15 16:21:05.779974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.171 qpair failed and we were unable to recover it. 00:29:30.171 [2024-07-15 16:21:05.780488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.171 [2024-07-15 16:21:05.780517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.171 qpair failed and we were unable to recover it. 00:29:30.171 [2024-07-15 16:21:05.780853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.171 [2024-07-15 16:21:05.780862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.171 qpair failed and we were unable to recover it. 00:29:30.171 [2024-07-15 16:21:05.781284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.171 [2024-07-15 16:21:05.781292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.171 qpair failed and we were unable to recover it. 00:29:30.171 [2024-07-15 16:21:05.781674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.171 [2024-07-15 16:21:05.781682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.171 qpair failed and we were unable to recover it. 00:29:30.171 [2024-07-15 16:21:05.782104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.171 [2024-07-15 16:21:05.782112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.171 qpair failed and we were unable to recover it. 
00:29:30.171 [2024-07-15 16:21:05.782516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.171 [2024-07-15 16:21:05.782524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.171 qpair failed and we were unable to recover it. 00:29:30.171 [2024-07-15 16:21:05.782989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.171 [2024-07-15 16:21:05.782998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.171 qpair failed and we were unable to recover it. 00:29:30.171 [2024-07-15 16:21:05.783501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.171 [2024-07-15 16:21:05.783529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.171 qpair failed and we were unable to recover it. 00:29:30.171 [2024-07-15 16:21:05.783959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.171 [2024-07-15 16:21:05.783969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.171 qpair failed and we were unable to recover it. 00:29:30.171 [2024-07-15 16:21:05.784472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.171 [2024-07-15 16:21:05.784501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.171 qpair failed and we were unable to recover it. 00:29:30.171 [2024-07-15 16:21:05.784929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.171 [2024-07-15 16:21:05.784938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.171 qpair failed and we were unable to recover it. 00:29:30.171 [2024-07-15 16:21:05.785333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.171 [2024-07-15 16:21:05.785362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.171 qpair failed and we were unable to recover it. 00:29:30.171 [2024-07-15 16:21:05.785767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.171 [2024-07-15 16:21:05.785777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.171 qpair failed and we were unable to recover it. 00:29:30.171 [2024-07-15 16:21:05.786355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.171 [2024-07-15 16:21:05.786384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.171 qpair failed and we were unable to recover it. 00:29:30.171 [2024-07-15 16:21:05.786798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.171 [2024-07-15 16:21:05.786807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.171 qpair failed and we were unable to recover it. 
00:29:30.171 [2024-07-15 16:21:05.787097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.171 [2024-07-15 16:21:05.787106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.171 qpair failed and we were unable to recover it. 00:29:30.171 [2024-07-15 16:21:05.787515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.171 [2024-07-15 16:21:05.787524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.171 qpair failed and we were unable to recover it. 00:29:30.171 [2024-07-15 16:21:05.787915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.171 [2024-07-15 16:21:05.787924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.171 qpair failed and we were unable to recover it. 00:29:30.171 [2024-07-15 16:21:05.788388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.171 [2024-07-15 16:21:05.788417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.171 qpair failed and we were unable to recover it. 00:29:30.171 [2024-07-15 16:21:05.788813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.171 [2024-07-15 16:21:05.788823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.171 qpair failed and we were unable to recover it. 00:29:30.171 [2024-07-15 16:21:05.789355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.171 [2024-07-15 16:21:05.789384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.171 qpair failed and we were unable to recover it. 00:29:30.171 [2024-07-15 16:21:05.789868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.171 [2024-07-15 16:21:05.789879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.171 qpair failed and we were unable to recover it. 00:29:30.171 [2024-07-15 16:21:05.790338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.171 [2024-07-15 16:21:05.790367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.171 qpair failed and we were unable to recover it. 00:29:30.171 [2024-07-15 16:21:05.790763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.171 [2024-07-15 16:21:05.790773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.171 qpair failed and we were unable to recover it. 00:29:30.171 [2024-07-15 16:21:05.791071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.171 [2024-07-15 16:21:05.791080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.171 qpair failed and we were unable to recover it. 
00:29:30.171 [2024-07-15 16:21:05.791454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.171 [2024-07-15 16:21:05.791462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.171 qpair failed and we were unable to recover it. 00:29:30.171 [2024-07-15 16:21:05.791857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.171 [2024-07-15 16:21:05.791865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.171 qpair failed and we were unable to recover it. 00:29:30.171 [2024-07-15 16:21:05.792382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.171 [2024-07-15 16:21:05.792411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.171 qpair failed and we were unable to recover it. 00:29:30.171 [2024-07-15 16:21:05.792828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.171 [2024-07-15 16:21:05.792837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.171 qpair failed and we were unable to recover it. 00:29:30.171 [2024-07-15 16:21:05.793241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.171 [2024-07-15 16:21:05.793250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.171 qpair failed and we were unable to recover it. 00:29:30.171 [2024-07-15 16:21:05.793717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.171 [2024-07-15 16:21:05.793726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.171 qpair failed and we were unable to recover it. 00:29:30.171 [2024-07-15 16:21:05.794143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.171 [2024-07-15 16:21:05.794151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.171 qpair failed and we were unable to recover it. 00:29:30.171 [2024-07-15 16:21:05.794539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.172 [2024-07-15 16:21:05.794547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.172 qpair failed and we were unable to recover it. 00:29:30.172 [2024-07-15 16:21:05.794921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.172 [2024-07-15 16:21:05.794929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.172 qpair failed and we were unable to recover it. 00:29:30.172 [2024-07-15 16:21:05.795290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.172 [2024-07-15 16:21:05.795303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.172 qpair failed and we were unable to recover it. 
00:29:30.172 [2024-07-15 16:21:05.795699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.172 [2024-07-15 16:21:05.795708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.172 qpair failed and we were unable to recover it. 00:29:30.172 [2024-07-15 16:21:05.796126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.172 [2024-07-15 16:21:05.796134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.172 qpair failed and we were unable to recover it. 00:29:30.172 [2024-07-15 16:21:05.796426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.172 [2024-07-15 16:21:05.796435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.172 qpair failed and we were unable to recover it. 00:29:30.172 [2024-07-15 16:21:05.796640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.172 [2024-07-15 16:21:05.796650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.172 qpair failed and we were unable to recover it. 00:29:30.172 [2024-07-15 16:21:05.797053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.172 [2024-07-15 16:21:05.797062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.172 qpair failed and we were unable to recover it. 00:29:30.172 [2024-07-15 16:21:05.797462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.172 [2024-07-15 16:21:05.797470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.172 qpair failed and we were unable to recover it. 00:29:30.172 [2024-07-15 16:21:05.797806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.172 [2024-07-15 16:21:05.797815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.172 qpair failed and we were unable to recover it. 00:29:30.172 [2024-07-15 16:21:05.798216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.172 [2024-07-15 16:21:05.798224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.172 qpair failed and we were unable to recover it. 00:29:30.172 [2024-07-15 16:21:05.798537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.172 [2024-07-15 16:21:05.798546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.172 qpair failed and we were unable to recover it. 00:29:30.172 [2024-07-15 16:21:05.798941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.172 [2024-07-15 16:21:05.798949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.172 qpair failed and we were unable to recover it. 
00:29:30.172 [2024-07-15 16:21:05.799148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.172 [2024-07-15 16:21:05.799158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.172 qpair failed and we were unable to recover it. 00:29:30.172 [2024-07-15 16:21:05.799444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.172 [2024-07-15 16:21:05.799454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.172 qpair failed and we were unable to recover it. 00:29:30.172 [2024-07-15 16:21:05.799831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.172 [2024-07-15 16:21:05.799840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.172 qpair failed and we were unable to recover it. 00:29:30.172 [2024-07-15 16:21:05.800256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.172 [2024-07-15 16:21:05.800265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.172 qpair failed and we were unable to recover it. 00:29:30.172 [2024-07-15 16:21:05.800672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.172 [2024-07-15 16:21:05.800681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.172 qpair failed and we were unable to recover it. 00:29:30.172 [2024-07-15 16:21:05.801028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.172 [2024-07-15 16:21:05.801036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.172 qpair failed and we were unable to recover it. 00:29:30.172 [2024-07-15 16:21:05.801437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.172 [2024-07-15 16:21:05.801445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.172 qpair failed and we were unable to recover it. 00:29:30.172 [2024-07-15 16:21:05.801716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.172 [2024-07-15 16:21:05.801724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.172 qpair failed and we were unable to recover it. 00:29:30.172 [2024-07-15 16:21:05.802120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.172 [2024-07-15 16:21:05.802131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.172 qpair failed and we were unable to recover it. 00:29:30.172 [2024-07-15 16:21:05.802547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.172 [2024-07-15 16:21:05.802555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.172 qpair failed and we were unable to recover it. 
00:29:30.172 [2024-07-15 16:21:05.803011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.172 [2024-07-15 16:21:05.803019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.172 qpair failed and we were unable to recover it. 00:29:30.172 [2024-07-15 16:21:05.803436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.172 [2024-07-15 16:21:05.803445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.172 qpair failed and we were unable to recover it. 00:29:30.172 [2024-07-15 16:21:05.803839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.172 [2024-07-15 16:21:05.803847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.172 qpair failed and we were unable to recover it. 00:29:30.172 [2024-07-15 16:21:05.804238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.172 [2024-07-15 16:21:05.804246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.172 qpair failed and we were unable to recover it. 00:29:30.172 [2024-07-15 16:21:05.804642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.172 [2024-07-15 16:21:05.804651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.172 qpair failed and we were unable to recover it. 00:29:30.172 [2024-07-15 16:21:05.805068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.172 [2024-07-15 16:21:05.805076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.172 qpair failed and we were unable to recover it. 00:29:30.172 [2024-07-15 16:21:05.805277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.172 [2024-07-15 16:21:05.805286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.172 qpair failed and we were unable to recover it. 00:29:30.172 [2024-07-15 16:21:05.805697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.172 [2024-07-15 16:21:05.805706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.172 qpair failed and we were unable to recover it. 00:29:30.172 [2024-07-15 16:21:05.805997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.172 [2024-07-15 16:21:05.806007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.172 qpair failed and we were unable to recover it. 00:29:30.172 [2024-07-15 16:21:05.806452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.172 [2024-07-15 16:21:05.806460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.172 qpair failed and we were unable to recover it. 
00:29:30.172 [2024-07-15 16:21:05.806843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.172 [2024-07-15 16:21:05.806852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.172 qpair failed and we were unable to recover it. 00:29:30.172 [2024-07-15 16:21:05.807253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.172 [2024-07-15 16:21:05.807261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.172 qpair failed and we were unable to recover it. 00:29:30.172 [2024-07-15 16:21:05.807577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.172 [2024-07-15 16:21:05.807585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.172 qpair failed and we were unable to recover it. 00:29:30.172 [2024-07-15 16:21:05.807977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.172 [2024-07-15 16:21:05.807985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.172 qpair failed and we were unable to recover it. 00:29:30.172 [2024-07-15 16:21:05.808389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.172 [2024-07-15 16:21:05.808398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.172 qpair failed and we were unable to recover it. 00:29:30.172 [2024-07-15 16:21:05.808815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.172 [2024-07-15 16:21:05.808823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.172 qpair failed and we were unable to recover it. 00:29:30.172 [2024-07-15 16:21:05.809251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.172 [2024-07-15 16:21:05.809259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.172 qpair failed and we were unable to recover it. 00:29:30.172 [2024-07-15 16:21:05.809683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.172 [2024-07-15 16:21:05.809691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.172 qpair failed and we were unable to recover it. 00:29:30.172 [2024-07-15 16:21:05.809987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.173 [2024-07-15 16:21:05.809995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.173 qpair failed and we were unable to recover it. 00:29:30.173 [2024-07-15 16:21:05.810403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.173 [2024-07-15 16:21:05.810413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.173 qpair failed and we were unable to recover it. 
00:29:30.173 [2024-07-15 16:21:05.810780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.173 [2024-07-15 16:21:05.810788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.173 qpair failed and we were unable to recover it. 00:29:30.173 [2024-07-15 16:21:05.811327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.173 [2024-07-15 16:21:05.811356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.173 qpair failed and we were unable to recover it. 00:29:30.173 [2024-07-15 16:21:05.811754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.173 [2024-07-15 16:21:05.811763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.173 qpair failed and we were unable to recover it. 00:29:30.173 [2024-07-15 16:21:05.812061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.173 [2024-07-15 16:21:05.812070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.173 qpair failed and we were unable to recover it. 00:29:30.173 [2024-07-15 16:21:05.812399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.173 [2024-07-15 16:21:05.812407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.173 qpair failed and we were unable to recover it. 00:29:30.173 [2024-07-15 16:21:05.812788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.173 [2024-07-15 16:21:05.812796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.173 qpair failed and we were unable to recover it. 00:29:30.173 [2024-07-15 16:21:05.813186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.173 [2024-07-15 16:21:05.813194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.173 qpair failed and we were unable to recover it. 00:29:30.173 [2024-07-15 16:21:05.813596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.173 [2024-07-15 16:21:05.813604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.173 qpair failed and we were unable to recover it. 00:29:30.173 [2024-07-15 16:21:05.813902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.173 [2024-07-15 16:21:05.813911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.173 qpair failed and we were unable to recover it. 00:29:30.173 [2024-07-15 16:21:05.814336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.173 [2024-07-15 16:21:05.814344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.173 qpair failed and we were unable to recover it. 
00:29:30.173 [2024-07-15 16:21:05.814729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.173 [2024-07-15 16:21:05.814737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.173 qpair failed and we were unable to recover it. 00:29:30.173 [2024-07-15 16:21:05.815085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.173 [2024-07-15 16:21:05.815093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.173 qpair failed and we were unable to recover it. 00:29:30.173 [2024-07-15 16:21:05.815496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.173 [2024-07-15 16:21:05.815505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.173 qpair failed and we were unable to recover it. 00:29:30.173 [2024-07-15 16:21:05.815793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.173 [2024-07-15 16:21:05.815802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.173 qpair failed and we were unable to recover it. 00:29:30.173 [2024-07-15 16:21:05.816209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.173 [2024-07-15 16:21:05.816217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.173 qpair failed and we were unable to recover it. 00:29:30.173 [2024-07-15 16:21:05.816555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.173 [2024-07-15 16:21:05.816563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.173 qpair failed and we were unable to recover it. 00:29:30.173 [2024-07-15 16:21:05.816952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.173 [2024-07-15 16:21:05.816960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.173 qpair failed and we were unable to recover it. 00:29:30.173 [2024-07-15 16:21:05.817267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.173 [2024-07-15 16:21:05.817277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.173 qpair failed and we were unable to recover it. 00:29:30.173 [2024-07-15 16:21:05.817577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.173 [2024-07-15 16:21:05.817585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.173 qpair failed and we were unable to recover it. 00:29:30.173 [2024-07-15 16:21:05.817970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.173 [2024-07-15 16:21:05.817979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.173 qpair failed and we were unable to recover it. 
00:29:30.173 [2024-07-15 16:21:05.818267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.173 [2024-07-15 16:21:05.818275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.173 qpair failed and we were unable to recover it. 00:29:30.173 [2024-07-15 16:21:05.818555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.173 [2024-07-15 16:21:05.818564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.173 qpair failed and we were unable to recover it. 00:29:30.173 [2024-07-15 16:21:05.818949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.173 [2024-07-15 16:21:05.818957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.173 qpair failed and we were unable to recover it. 00:29:30.173 [2024-07-15 16:21:05.819376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.173 [2024-07-15 16:21:05.819385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.173 qpair failed and we were unable to recover it. 00:29:30.173 [2024-07-15 16:21:05.819776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.173 [2024-07-15 16:21:05.819785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.173 qpair failed and we were unable to recover it. 00:29:30.173 [2024-07-15 16:21:05.820182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.173 [2024-07-15 16:21:05.820190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.173 qpair failed and we were unable to recover it. 00:29:30.173 [2024-07-15 16:21:05.820584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.173 [2024-07-15 16:21:05.820593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.173 qpair failed and we were unable to recover it. 00:29:30.173 [2024-07-15 16:21:05.820984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.173 [2024-07-15 16:21:05.820992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.173 qpair failed and we were unable to recover it. 00:29:30.173 [2024-07-15 16:21:05.821397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.173 [2024-07-15 16:21:05.821406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.173 qpair failed and we were unable to recover it. 00:29:30.173 [2024-07-15 16:21:05.821791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.173 [2024-07-15 16:21:05.821799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.173 qpair failed and we were unable to recover it. 
00:29:30.173 [2024-07-15 16:21:05.822185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.173 [2024-07-15 16:21:05.822194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.173 qpair failed and we were unable to recover it. 00:29:30.173 [2024-07-15 16:21:05.822623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.173 [2024-07-15 16:21:05.822631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.173 qpair failed and we were unable to recover it. 00:29:30.173 [2024-07-15 16:21:05.822923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.173 [2024-07-15 16:21:05.822932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.173 qpair failed and we were unable to recover it. 00:29:30.173 [2024-07-15 16:21:05.823249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.173 [2024-07-15 16:21:05.823257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.173 qpair failed and we were unable to recover it. 00:29:30.173 [2024-07-15 16:21:05.823649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.173 [2024-07-15 16:21:05.823657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.173 qpair failed and we were unable to recover it. 00:29:30.173 [2024-07-15 16:21:05.824046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.173 [2024-07-15 16:21:05.824054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.173 qpair failed and we were unable to recover it. 00:29:30.173 [2024-07-15 16:21:05.824507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.173 [2024-07-15 16:21:05.824516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.173 qpair failed and we were unable to recover it. 00:29:30.173 [2024-07-15 16:21:05.824933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.173 [2024-07-15 16:21:05.824941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.173 qpair failed and we were unable to recover it. 00:29:30.173 [2024-07-15 16:21:05.825257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.173 [2024-07-15 16:21:05.825265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.174 qpair failed and we were unable to recover it. 00:29:30.174 [2024-07-15 16:21:05.825614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.174 [2024-07-15 16:21:05.825623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.174 qpair failed and we were unable to recover it. 
00:29:30.174 [2024-07-15 16:21:05.826017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.174 [2024-07-15 16:21:05.826025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.174 qpair failed and we were unable to recover it. 00:29:30.174 [2024-07-15 16:21:05.826452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.174 [2024-07-15 16:21:05.826460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.174 qpair failed and we were unable to recover it. 00:29:30.174 [2024-07-15 16:21:05.826841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.174 [2024-07-15 16:21:05.826849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.174 qpair failed and we were unable to recover it. 00:29:30.174 [2024-07-15 16:21:05.827268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.174 [2024-07-15 16:21:05.827276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.174 qpair failed and we were unable to recover it. 00:29:30.174 [2024-07-15 16:21:05.827680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.174 [2024-07-15 16:21:05.827689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.174 qpair failed and we were unable to recover it. 00:29:30.174 [2024-07-15 16:21:05.828077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.174 [2024-07-15 16:21:05.828085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.174 qpair failed and we were unable to recover it. 00:29:30.174 [2024-07-15 16:21:05.828496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.174 [2024-07-15 16:21:05.828505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.174 qpair failed and we were unable to recover it. 00:29:30.174 [2024-07-15 16:21:05.828798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.174 [2024-07-15 16:21:05.828807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.174 qpair failed and we were unable to recover it. 00:29:30.174 [2024-07-15 16:21:05.829098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.174 [2024-07-15 16:21:05.829106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.174 qpair failed and we were unable to recover it. 00:29:30.174 [2024-07-15 16:21:05.829512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.174 [2024-07-15 16:21:05.829520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.174 qpair failed and we were unable to recover it. 
00:29:30.174 [2024-07-15 16:21:05.829927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.174 [2024-07-15 16:21:05.829935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.174 qpair failed and we were unable to recover it. 00:29:30.174 [2024-07-15 16:21:05.830369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.174 [2024-07-15 16:21:05.830397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.174 qpair failed and we were unable to recover it. 00:29:30.174 [2024-07-15 16:21:05.830693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.174 [2024-07-15 16:21:05.830704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.174 qpair failed and we were unable to recover it. 00:29:30.174 [2024-07-15 16:21:05.831061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.174 [2024-07-15 16:21:05.831070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.174 qpair failed and we were unable to recover it. 00:29:30.174 [2024-07-15 16:21:05.831469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.174 [2024-07-15 16:21:05.831477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.174 qpair failed and we were unable to recover it. 00:29:30.174 [2024-07-15 16:21:05.831874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.174 [2024-07-15 16:21:05.831883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.174 qpair failed and we were unable to recover it. 00:29:30.174 [2024-07-15 16:21:05.832152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.174 [2024-07-15 16:21:05.832162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.174 qpair failed and we were unable to recover it. 00:29:30.174 [2024-07-15 16:21:05.832575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.174 [2024-07-15 16:21:05.832584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.174 qpair failed and we were unable to recover it. 00:29:30.174 [2024-07-15 16:21:05.832888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.174 [2024-07-15 16:21:05.832897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.174 qpair failed and we were unable to recover it. 00:29:30.174 [2024-07-15 16:21:05.833047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.174 [2024-07-15 16:21:05.833056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.174 qpair failed and we were unable to recover it. 
00:29:30.174 [2024-07-15 16:21:05.833442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.174 [2024-07-15 16:21:05.833450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.174 qpair failed and we were unable to recover it. 00:29:30.174 [2024-07-15 16:21:05.833913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.174 [2024-07-15 16:21:05.833921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.174 qpair failed and we were unable to recover it. 00:29:30.174 [2024-07-15 16:21:05.834219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.174 [2024-07-15 16:21:05.834229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.174 qpair failed and we were unable to recover it. 00:29:30.174 [2024-07-15 16:21:05.834544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.174 [2024-07-15 16:21:05.834552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.174 qpair failed and we were unable to recover it. 00:29:30.174 [2024-07-15 16:21:05.834945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.174 [2024-07-15 16:21:05.834953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.174 qpair failed and we were unable to recover it. 00:29:30.174 [2024-07-15 16:21:05.835350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.174 [2024-07-15 16:21:05.835359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.174 qpair failed and we were unable to recover it. 00:29:30.174 [2024-07-15 16:21:05.835751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.174 [2024-07-15 16:21:05.835759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.174 qpair failed and we were unable to recover it. 00:29:30.174 [2024-07-15 16:21:05.836165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.174 [2024-07-15 16:21:05.836174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.174 qpair failed and we were unable to recover it. 00:29:30.174 [2024-07-15 16:21:05.836564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.174 [2024-07-15 16:21:05.836572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.174 qpair failed and we were unable to recover it. 00:29:30.174 [2024-07-15 16:21:05.836747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.174 [2024-07-15 16:21:05.836757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.174 qpair failed and we were unable to recover it. 
00:29:30.174 [2024-07-15 16:21:05.837173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.174 [2024-07-15 16:21:05.837181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.174 qpair failed and we were unable to recover it. 00:29:30.174 [2024-07-15 16:21:05.837559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.174 [2024-07-15 16:21:05.837567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.174 qpair failed and we were unable to recover it. 00:29:30.174 [2024-07-15 16:21:05.837985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.174 [2024-07-15 16:21:05.837993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.174 qpair failed and we were unable to recover it. 00:29:30.175 [2024-07-15 16:21:05.838353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.175 [2024-07-15 16:21:05.838362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.175 qpair failed and we were unable to recover it. 00:29:30.175 [2024-07-15 16:21:05.838746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.175 [2024-07-15 16:21:05.838754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.175 qpair failed and we were unable to recover it. 00:29:30.175 [2024-07-15 16:21:05.839055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.175 [2024-07-15 16:21:05.839064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.175 qpair failed and we were unable to recover it. 00:29:30.175 [2024-07-15 16:21:05.839394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.175 [2024-07-15 16:21:05.839402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.175 qpair failed and we were unable to recover it. 00:29:30.175 [2024-07-15 16:21:05.839806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.175 [2024-07-15 16:21:05.839814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.175 qpair failed and we were unable to recover it. 00:29:30.175 [2024-07-15 16:21:05.840087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.175 [2024-07-15 16:21:05.840094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.175 qpair failed and we were unable to recover it. 00:29:30.175 [2024-07-15 16:21:05.840396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.175 [2024-07-15 16:21:05.840407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.175 qpair failed and we were unable to recover it. 
00:29:30.175 [2024-07-15 16:21:05.840823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.175 [2024-07-15 16:21:05.840830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.175 qpair failed and we were unable to recover it. 00:29:30.175 [2024-07-15 16:21:05.841215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.175 [2024-07-15 16:21:05.841223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.175 qpair failed and we were unable to recover it. 00:29:30.175 [2024-07-15 16:21:05.841515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.175 [2024-07-15 16:21:05.841523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.175 qpair failed and we were unable to recover it. 00:29:30.175 [2024-07-15 16:21:05.841934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.175 [2024-07-15 16:21:05.841942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.175 qpair failed and we were unable to recover it. 00:29:30.175 [2024-07-15 16:21:05.842234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.175 [2024-07-15 16:21:05.842243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.175 qpair failed and we were unable to recover it. 00:29:30.175 [2024-07-15 16:21:05.842654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.175 [2024-07-15 16:21:05.842662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.175 qpair failed and we were unable to recover it. 00:29:30.175 [2024-07-15 16:21:05.843040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.175 [2024-07-15 16:21:05.843048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.175 qpair failed and we were unable to recover it. 00:29:30.175 [2024-07-15 16:21:05.843331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.175 [2024-07-15 16:21:05.843339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.175 qpair failed and we were unable to recover it. 00:29:30.175 [2024-07-15 16:21:05.843746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.175 [2024-07-15 16:21:05.843754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.175 qpair failed and we were unable to recover it. 00:29:30.175 [2024-07-15 16:21:05.844059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.175 [2024-07-15 16:21:05.844068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.175 qpair failed and we were unable to recover it. 
00:29:30.175 [2024-07-15 16:21:05.844388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.175 [2024-07-15 16:21:05.844396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.175 qpair failed and we were unable to recover it. 00:29:30.175 [2024-07-15 16:21:05.844801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.175 [2024-07-15 16:21:05.844810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.175 qpair failed and we were unable to recover it. 00:29:30.175 [2024-07-15 16:21:05.845212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.175 [2024-07-15 16:21:05.845219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.175 qpair failed and we were unable to recover it. 00:29:30.175 [2024-07-15 16:21:05.845638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.175 [2024-07-15 16:21:05.845647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.175 qpair failed and we were unable to recover it. 00:29:30.175 [2024-07-15 16:21:05.846038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.175 [2024-07-15 16:21:05.846046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.175 qpair failed and we were unable to recover it. 00:29:30.175 [2024-07-15 16:21:05.846330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.175 [2024-07-15 16:21:05.846338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.175 qpair failed and we were unable to recover it. 00:29:30.175 [2024-07-15 16:21:05.846746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.175 [2024-07-15 16:21:05.846754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.175 qpair failed and we were unable to recover it. 00:29:30.175 [2024-07-15 16:21:05.847136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.175 [2024-07-15 16:21:05.847144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.175 qpair failed and we were unable to recover it. 00:29:30.175 [2024-07-15 16:21:05.847341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.175 [2024-07-15 16:21:05.847350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.175 qpair failed and we were unable to recover it. 00:29:30.175 [2024-07-15 16:21:05.847719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.175 [2024-07-15 16:21:05.847727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.175 qpair failed and we were unable to recover it. 
00:29:30.175 [2024-07-15 16:21:05.848119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.175 [2024-07-15 16:21:05.848132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.175 qpair failed and we were unable to recover it. 00:29:30.175 [2024-07-15 16:21:05.848432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.175 [2024-07-15 16:21:05.848440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.175 qpair failed and we were unable to recover it. 00:29:30.175 [2024-07-15 16:21:05.848825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.175 [2024-07-15 16:21:05.848833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.175 qpair failed and we were unable to recover it. 00:29:30.175 [2024-07-15 16:21:05.849228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.175 [2024-07-15 16:21:05.849236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.175 qpair failed and we were unable to recover it. 00:29:30.175 [2024-07-15 16:21:05.849634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.175 [2024-07-15 16:21:05.849642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.175 qpair failed and we were unable to recover it. 00:29:30.175 [2024-07-15 16:21:05.849974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.175 [2024-07-15 16:21:05.849983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.175 qpair failed and we were unable to recover it. 00:29:30.175 [2024-07-15 16:21:05.850412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.175 [2024-07-15 16:21:05.850420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.175 qpair failed and we were unable to recover it. 00:29:30.175 [2024-07-15 16:21:05.850814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.175 [2024-07-15 16:21:05.850822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.175 qpair failed and we were unable to recover it. 00:29:30.175 [2024-07-15 16:21:05.851337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.175 [2024-07-15 16:21:05.851366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.175 qpair failed and we were unable to recover it. 00:29:30.175 [2024-07-15 16:21:05.851756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.175 [2024-07-15 16:21:05.851765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.175 qpair failed and we were unable to recover it. 
00:29:30.175 [2024-07-15 16:21:05.852161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.175 [2024-07-15 16:21:05.852170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.175 qpair failed and we were unable to recover it. 00:29:30.175 [2024-07-15 16:21:05.852582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.175 [2024-07-15 16:21:05.852590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.175 qpair failed and we were unable to recover it. 00:29:30.175 [2024-07-15 16:21:05.852877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.175 [2024-07-15 16:21:05.852885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.175 qpair failed and we were unable to recover it. 00:29:30.175 [2024-07-15 16:21:05.853199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.176 [2024-07-15 16:21:05.853207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.176 qpair failed and we were unable to recover it. 00:29:30.176 [2024-07-15 16:21:05.853615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.176 [2024-07-15 16:21:05.853623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.176 qpair failed and we were unable to recover it. 00:29:30.176 [2024-07-15 16:21:05.854046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.176 [2024-07-15 16:21:05.854054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.176 qpair failed and we were unable to recover it. 00:29:30.176 [2024-07-15 16:21:05.854350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.176 [2024-07-15 16:21:05.854359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.176 qpair failed and we were unable to recover it. 00:29:30.176 [2024-07-15 16:21:05.854766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.176 [2024-07-15 16:21:05.854773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.176 qpair failed and we were unable to recover it. 00:29:30.176 [2024-07-15 16:21:05.855177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.176 [2024-07-15 16:21:05.855185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.176 qpair failed and we were unable to recover it. 00:29:30.176 [2024-07-15 16:21:05.855552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.176 [2024-07-15 16:21:05.855563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.176 qpair failed and we were unable to recover it. 
00:29:30.176 [2024-07-15 16:21:05.855951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.176 [2024-07-15 16:21:05.855959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.176 qpair failed and we were unable to recover it. 00:29:30.176 [2024-07-15 16:21:05.856343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.176 [2024-07-15 16:21:05.856351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.176 qpair failed and we were unable to recover it. 00:29:30.176 [2024-07-15 16:21:05.856727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.176 [2024-07-15 16:21:05.856736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.176 qpair failed and we were unable to recover it. 00:29:30.176 [2024-07-15 16:21:05.857140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.176 [2024-07-15 16:21:05.857148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.176 qpair failed and we were unable to recover it. 00:29:30.176 [2024-07-15 16:21:05.857556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.176 [2024-07-15 16:21:05.857564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.176 qpair failed and we were unable to recover it. 00:29:30.176 [2024-07-15 16:21:05.857945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.176 [2024-07-15 16:21:05.857953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.176 qpair failed and we were unable to recover it. 00:29:30.176 [2024-07-15 16:21:05.858344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.176 [2024-07-15 16:21:05.858352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.176 qpair failed and we were unable to recover it. 00:29:30.176 [2024-07-15 16:21:05.858761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.176 [2024-07-15 16:21:05.858769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.176 qpair failed and we were unable to recover it. 00:29:30.176 [2024-07-15 16:21:05.859178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.176 [2024-07-15 16:21:05.859187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.176 qpair failed and we were unable to recover it. 00:29:30.176 [2024-07-15 16:21:05.859487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.176 [2024-07-15 16:21:05.859497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.176 qpair failed and we were unable to recover it. 
00:29:30.176 [2024-07-15 16:21:05.859792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.176 [2024-07-15 16:21:05.859800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.176 qpair failed and we were unable to recover it. 00:29:30.176 [2024-07-15 16:21:05.860192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.176 [2024-07-15 16:21:05.860201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.176 qpair failed and we were unable to recover it. 00:29:30.176 [2024-07-15 16:21:05.860625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.176 [2024-07-15 16:21:05.860633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.176 qpair failed and we were unable to recover it. 00:29:30.176 [2024-07-15 16:21:05.861015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.176 [2024-07-15 16:21:05.861023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.176 qpair failed and we were unable to recover it. 00:29:30.176 [2024-07-15 16:21:05.861314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.176 [2024-07-15 16:21:05.861322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.176 qpair failed and we were unable to recover it. 00:29:30.176 [2024-07-15 16:21:05.861712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.176 [2024-07-15 16:21:05.861720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.176 qpair failed and we were unable to recover it. 00:29:30.176 [2024-07-15 16:21:05.862090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.176 [2024-07-15 16:21:05.862098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.176 qpair failed and we were unable to recover it. 00:29:30.176 [2024-07-15 16:21:05.862493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.176 [2024-07-15 16:21:05.862501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.176 qpair failed and we were unable to recover it. 00:29:30.176 [2024-07-15 16:21:05.862888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.176 [2024-07-15 16:21:05.862896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.176 qpair failed and we were unable to recover it. 00:29:30.176 [2024-07-15 16:21:05.863198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.176 [2024-07-15 16:21:05.863206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.176 qpair failed and we were unable to recover it. 
00:29:30.176 [2024-07-15 16:21:05.863654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.176 [2024-07-15 16:21:05.863662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.176 qpair failed and we were unable to recover it. 00:29:30.176 [2024-07-15 16:21:05.864073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.176 [2024-07-15 16:21:05.864082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.176 qpair failed and we were unable to recover it. 00:29:30.176 [2024-07-15 16:21:05.864458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.176 [2024-07-15 16:21:05.864467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.176 qpair failed and we were unable to recover it. 00:29:30.176 [2024-07-15 16:21:05.864861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.176 [2024-07-15 16:21:05.864870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.176 qpair failed and we were unable to recover it. 00:29:30.176 [2024-07-15 16:21:05.865254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.176 [2024-07-15 16:21:05.865263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.176 qpair failed and we were unable to recover it. 00:29:30.176 [2024-07-15 16:21:05.865697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.176 [2024-07-15 16:21:05.865705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.176 qpair failed and we were unable to recover it. 00:29:30.176 [2024-07-15 16:21:05.866091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.176 [2024-07-15 16:21:05.866100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.176 qpair failed and we were unable to recover it. 00:29:30.176 [2024-07-15 16:21:05.866457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.176 [2024-07-15 16:21:05.866465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.176 qpair failed and we were unable to recover it. 00:29:30.176 [2024-07-15 16:21:05.866865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.176 [2024-07-15 16:21:05.866874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.176 qpair failed and we were unable to recover it. 00:29:30.176 [2024-07-15 16:21:05.867439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.176 [2024-07-15 16:21:05.867468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.176 qpair failed and we were unable to recover it. 
00:29:30.176 [2024-07-15 16:21:05.867874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.176 [2024-07-15 16:21:05.867883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.176 qpair failed and we were unable to recover it. 00:29:30.176 [2024-07-15 16:21:05.868379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.176 [2024-07-15 16:21:05.868408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.177 qpair failed and we were unable to recover it. 00:29:30.177 [2024-07-15 16:21:05.868810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.177 [2024-07-15 16:21:05.868819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.177 qpair failed and we were unable to recover it. 00:29:30.177 [2024-07-15 16:21:05.869253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.177 [2024-07-15 16:21:05.869262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.177 qpair failed and we were unable to recover it. 00:29:30.177 [2024-07-15 16:21:05.869655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.177 [2024-07-15 16:21:05.869663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.177 qpair failed and we were unable to recover it. 00:29:30.177 [2024-07-15 16:21:05.870060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.177 [2024-07-15 16:21:05.870069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.177 qpair failed and we were unable to recover it. 00:29:30.177 [2024-07-15 16:21:05.870585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.177 [2024-07-15 16:21:05.870593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.177 qpair failed and we were unable to recover it. 00:29:30.177 [2024-07-15 16:21:05.870972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.177 [2024-07-15 16:21:05.870981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.177 qpair failed and we were unable to recover it. 00:29:30.177 [2024-07-15 16:21:05.871383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.177 [2024-07-15 16:21:05.871412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.177 qpair failed and we were unable to recover it. 00:29:30.177 [2024-07-15 16:21:05.871814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.177 [2024-07-15 16:21:05.871826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.177 qpair failed and we were unable to recover it. 
00:29:30.177 [2024-07-15 16:21:05.872339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.177 [2024-07-15 16:21:05.872368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.177 qpair failed and we were unable to recover it. 00:29:30.177 [2024-07-15 16:21:05.872783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.177 [2024-07-15 16:21:05.872793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.177 qpair failed and we were unable to recover it. 00:29:30.177 [2024-07-15 16:21:05.873185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.177 [2024-07-15 16:21:05.873194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.177 qpair failed and we were unable to recover it. 00:29:30.177 [2024-07-15 16:21:05.873505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.177 [2024-07-15 16:21:05.873513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.177 qpair failed and we were unable to recover it. 00:29:30.177 [2024-07-15 16:21:05.873918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.177 [2024-07-15 16:21:05.873925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.177 qpair failed and we were unable to recover it. 00:29:30.177 [2024-07-15 16:21:05.874324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.177 [2024-07-15 16:21:05.874332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.177 qpair failed and we were unable to recover it. 00:29:30.177 [2024-07-15 16:21:05.874725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.177 [2024-07-15 16:21:05.874732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.177 qpair failed and we were unable to recover it. 00:29:30.177 [2024-07-15 16:21:05.875119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.177 [2024-07-15 16:21:05.875131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.177 qpair failed and we were unable to recover it. 00:29:30.177 [2024-07-15 16:21:05.875515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.177 [2024-07-15 16:21:05.875524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.177 qpair failed and we were unable to recover it. 00:29:30.177 [2024-07-15 16:21:05.875731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.177 [2024-07-15 16:21:05.875742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.177 qpair failed and we were unable to recover it. 
00:29:30.177 [2024-07-15 16:21:05.876099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.177 [2024-07-15 16:21:05.876108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.177 qpair failed and we were unable to recover it. 00:29:30.177 [2024-07-15 16:21:05.876503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.177 [2024-07-15 16:21:05.876511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.177 qpair failed and we were unable to recover it. 00:29:30.177 [2024-07-15 16:21:05.876906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.177 [2024-07-15 16:21:05.876914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.177 qpair failed and we were unable to recover it. 00:29:30.177 [2024-07-15 16:21:05.877414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.177 [2024-07-15 16:21:05.877443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.177 qpair failed and we were unable to recover it. 00:29:30.177 [2024-07-15 16:21:05.877847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.177 [2024-07-15 16:21:05.877857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.177 qpair failed and we were unable to recover it. 00:29:30.177 [2024-07-15 16:21:05.878395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.177 [2024-07-15 16:21:05.878424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.177 qpair failed and we were unable to recover it. 00:29:30.177 [2024-07-15 16:21:05.878822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.177 [2024-07-15 16:21:05.878832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.177 qpair failed and we were unable to recover it. 00:29:30.177 [2024-07-15 16:21:05.879139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.177 [2024-07-15 16:21:05.879148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.177 qpair failed and we were unable to recover it. 00:29:30.177 [2024-07-15 16:21:05.879638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.177 [2024-07-15 16:21:05.879646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.177 qpair failed and we were unable to recover it. 00:29:30.177 [2024-07-15 16:21:05.880044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.177 [2024-07-15 16:21:05.880052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.177 qpair failed and we were unable to recover it. 
00:29:30.177 [2024-07-15 16:21:05.880355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.177 [2024-07-15 16:21:05.880364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.177 qpair failed and we were unable to recover it. 00:29:30.177 [2024-07-15 16:21:05.880775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.177 [2024-07-15 16:21:05.880782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.177 qpair failed and we were unable to recover it. 00:29:30.177 [2024-07-15 16:21:05.881170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.177 [2024-07-15 16:21:05.881179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.177 qpair failed and we were unable to recover it. 00:29:30.177 [2024-07-15 16:21:05.881583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.177 [2024-07-15 16:21:05.881591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.177 qpair failed and we were unable to recover it. 00:29:30.177 [2024-07-15 16:21:05.881921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.177 [2024-07-15 16:21:05.881929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.177 qpair failed and we were unable to recover it. 00:29:30.177 [2024-07-15 16:21:05.882320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.177 [2024-07-15 16:21:05.882329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.177 qpair failed and we were unable to recover it. 00:29:30.177 [2024-07-15 16:21:05.882620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.177 [2024-07-15 16:21:05.882628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.177 qpair failed and we were unable to recover it. 00:29:30.177 [2024-07-15 16:21:05.883019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.177 [2024-07-15 16:21:05.883027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.177 qpair failed and we were unable to recover it. 00:29:30.177 [2024-07-15 16:21:05.883412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.177 [2024-07-15 16:21:05.883420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.177 qpair failed and we were unable to recover it. 00:29:30.177 [2024-07-15 16:21:05.883834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.177 [2024-07-15 16:21:05.883841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.177 qpair failed and we were unable to recover it. 
00:29:30.177 [2024-07-15 16:21:05.884232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.177 [2024-07-15 16:21:05.884241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.177 qpair failed and we were unable to recover it. 00:29:30.177 [2024-07-15 16:21:05.884551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.177 [2024-07-15 16:21:05.884559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.177 qpair failed and we were unable to recover it. 00:29:30.178 [2024-07-15 16:21:05.884949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.178 [2024-07-15 16:21:05.884957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.178 qpair failed and we were unable to recover it. 00:29:30.178 [2024-07-15 16:21:05.885363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.178 [2024-07-15 16:21:05.885372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.178 qpair failed and we were unable to recover it. 00:29:30.178 [2024-07-15 16:21:05.885767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.178 [2024-07-15 16:21:05.885776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.178 qpair failed and we were unable to recover it. 00:29:30.178 [2024-07-15 16:21:05.886168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.178 [2024-07-15 16:21:05.886177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.178 qpair failed and we were unable to recover it. 00:29:30.178 [2024-07-15 16:21:05.886574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.178 [2024-07-15 16:21:05.886582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.178 qpair failed and we were unable to recover it. 00:29:30.178 [2024-07-15 16:21:05.886996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.178 [2024-07-15 16:21:05.887003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.178 qpair failed and we were unable to recover it. 00:29:30.178 [2024-07-15 16:21:05.887324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.178 [2024-07-15 16:21:05.887332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.178 qpair failed and we were unable to recover it. 00:29:30.178 [2024-07-15 16:21:05.887756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.178 [2024-07-15 16:21:05.887765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.178 qpair failed and we were unable to recover it. 
00:29:30.178 [2024-07-15 16:21:05.888150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.178 [2024-07-15 16:21:05.888158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.178 qpair failed and we were unable to recover it. 00:29:30.178 [2024-07-15 16:21:05.888502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.178 [2024-07-15 16:21:05.888511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.178 qpair failed and we were unable to recover it. 00:29:30.178 [2024-07-15 16:21:05.888888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.178 [2024-07-15 16:21:05.888896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.178 qpair failed and we were unable to recover it. 00:29:30.178 [2024-07-15 16:21:05.889290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.178 [2024-07-15 16:21:05.889298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.178 qpair failed and we were unable to recover it. 00:29:30.178 [2024-07-15 16:21:05.889695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.178 [2024-07-15 16:21:05.889703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.178 qpair failed and we were unable to recover it. 00:29:30.178 [2024-07-15 16:21:05.889972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.178 [2024-07-15 16:21:05.889981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.178 qpair failed and we were unable to recover it. 00:29:30.178 [2024-07-15 16:21:05.890361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.178 [2024-07-15 16:21:05.890369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.178 qpair failed and we were unable to recover it. 00:29:30.178 [2024-07-15 16:21:05.890659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.178 [2024-07-15 16:21:05.890667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.178 qpair failed and we were unable to recover it. 00:29:30.178 [2024-07-15 16:21:05.891144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.178 [2024-07-15 16:21:05.891152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.178 qpair failed and we were unable to recover it. 00:29:30.178 [2024-07-15 16:21:05.891517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.178 [2024-07-15 16:21:05.891525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.178 qpair failed and we were unable to recover it. 
00:29:30.178 [2024-07-15 16:21:05.891907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.178 [2024-07-15 16:21:05.891916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.178 qpair failed and we were unable to recover it. 00:29:30.178 [2024-07-15 16:21:05.892265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.178 [2024-07-15 16:21:05.892273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.178 qpair failed and we were unable to recover it. 00:29:30.178 [2024-07-15 16:21:05.892595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.178 [2024-07-15 16:21:05.892603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.178 qpair failed and we were unable to recover it. 00:29:30.178 [2024-07-15 16:21:05.892898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.178 [2024-07-15 16:21:05.892906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.178 qpair failed and we were unable to recover it. 00:29:30.178 [2024-07-15 16:21:05.893322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.178 [2024-07-15 16:21:05.893330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.178 qpair failed and we were unable to recover it. 00:29:30.178 [2024-07-15 16:21:05.893723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.178 [2024-07-15 16:21:05.893731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.178 qpair failed and we were unable to recover it. 00:29:30.178 [2024-07-15 16:21:05.894116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.178 [2024-07-15 16:21:05.894128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.178 qpair failed and we were unable to recover it. 00:29:30.178 [2024-07-15 16:21:05.894424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.178 [2024-07-15 16:21:05.894433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.178 qpair failed and we were unable to recover it. 00:29:30.178 [2024-07-15 16:21:05.894825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.178 [2024-07-15 16:21:05.894833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.178 qpair failed and we were unable to recover it. 00:29:30.178 [2024-07-15 16:21:05.895234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.178 [2024-07-15 16:21:05.895242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.178 qpair failed and we were unable to recover it. 
00:29:30.178 [2024-07-15 16:21:05.895538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.178 [2024-07-15 16:21:05.895547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.178 qpair failed and we were unable to recover it. 00:29:30.178 [2024-07-15 16:21:05.895966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.178 [2024-07-15 16:21:05.895974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.178 qpair failed and we were unable to recover it. 00:29:30.178 [2024-07-15 16:21:05.896424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.178 [2024-07-15 16:21:05.896432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.178 qpair failed and we were unable to recover it. 00:29:30.178 [2024-07-15 16:21:05.896846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.178 [2024-07-15 16:21:05.896854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.178 qpair failed and we were unable to recover it. 00:29:30.178 [2024-07-15 16:21:05.897112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.178 [2024-07-15 16:21:05.897120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.178 qpair failed and we were unable to recover it. 00:29:30.178 [2024-07-15 16:21:05.897532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.178 [2024-07-15 16:21:05.897540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.178 qpair failed and we were unable to recover it. 00:29:30.178 [2024-07-15 16:21:05.897932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.178 [2024-07-15 16:21:05.897941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.178 qpair failed and we were unable to recover it. 00:29:30.178 [2024-07-15 16:21:05.898424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.179 [2024-07-15 16:21:05.898453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.179 qpair failed and we were unable to recover it. 00:29:30.179 [2024-07-15 16:21:05.898869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.179 [2024-07-15 16:21:05.898880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.179 qpair failed and we were unable to recover it. 00:29:30.179 [2024-07-15 16:21:05.899409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.179 [2024-07-15 16:21:05.899438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.179 qpair failed and we were unable to recover it. 
00:29:30.179 [2024-07-15 16:21:05.899831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.179 [2024-07-15 16:21:05.899841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.179 qpair failed and we were unable to recover it. 00:29:30.179 [2024-07-15 16:21:05.900371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.179 [2024-07-15 16:21:05.900400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.179 qpair failed and we were unable to recover it. 00:29:30.179 [2024-07-15 16:21:05.900843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.179 [2024-07-15 16:21:05.900852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.179 qpair failed and we were unable to recover it. 00:29:30.179 [2024-07-15 16:21:05.901351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.179 [2024-07-15 16:21:05.901379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.179 qpair failed and we were unable to recover it. 00:29:30.179 [2024-07-15 16:21:05.901780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.179 [2024-07-15 16:21:05.901791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.179 qpair failed and we were unable to recover it. 00:29:30.179 [2024-07-15 16:21:05.902192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.179 [2024-07-15 16:21:05.902201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.179 qpair failed and we were unable to recover it. 00:29:30.179 [2024-07-15 16:21:05.902610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.179 [2024-07-15 16:21:05.902618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.179 qpair failed and we were unable to recover it. 00:29:30.179 [2024-07-15 16:21:05.902973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.179 [2024-07-15 16:21:05.902981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.179 qpair failed and we were unable to recover it. 00:29:30.179 [2024-07-15 16:21:05.903400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.179 [2024-07-15 16:21:05.903408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.179 qpair failed and we were unable to recover it. 00:29:30.179 [2024-07-15 16:21:05.903818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.179 [2024-07-15 16:21:05.903830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.179 qpair failed and we were unable to recover it. 
00:29:30.179 [2024-07-15 16:21:05.904372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.179 [2024-07-15 16:21:05.904401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.179 qpair failed and we were unable to recover it. 00:29:30.179 [2024-07-15 16:21:05.904791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.179 [2024-07-15 16:21:05.904800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.179 qpair failed and we were unable to recover it. 00:29:30.179 [2024-07-15 16:21:05.905142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.179 [2024-07-15 16:21:05.905151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.179 qpair failed and we were unable to recover it. 00:29:30.179 [2024-07-15 16:21:05.905529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.179 [2024-07-15 16:21:05.905537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.179 qpair failed and we were unable to recover it. 00:29:30.179 [2024-07-15 16:21:05.905643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.179 [2024-07-15 16:21:05.905651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.179 qpair failed and we were unable to recover it. 00:29:30.179 [2024-07-15 16:21:05.906045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.179 [2024-07-15 16:21:05.906053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.179 qpair failed and we were unable to recover it. 00:29:30.179 [2024-07-15 16:21:05.906510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.179 [2024-07-15 16:21:05.906519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.179 qpair failed and we were unable to recover it. 00:29:30.179 [2024-07-15 16:21:05.906911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.179 [2024-07-15 16:21:05.906919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.179 qpair failed and we were unable to recover it. 00:29:30.179 [2024-07-15 16:21:05.907346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.179 [2024-07-15 16:21:05.907354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.179 qpair failed and we were unable to recover it. 00:29:30.179 [2024-07-15 16:21:05.907719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.179 [2024-07-15 16:21:05.907727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.179 qpair failed and we were unable to recover it. 
00:29:30.179 [2024-07-15 16:21:05.908133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.179 [2024-07-15 16:21:05.908141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.179 qpair failed and we were unable to recover it. 00:29:30.179 [2024-07-15 16:21:05.908340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.179 [2024-07-15 16:21:05.908348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.179 qpair failed and we were unable to recover it. 00:29:30.179 [2024-07-15 16:21:05.908749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.179 [2024-07-15 16:21:05.908757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.179 qpair failed and we were unable to recover it. 00:29:30.179 [2024-07-15 16:21:05.909174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.179 [2024-07-15 16:21:05.909184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.179 qpair failed and we were unable to recover it. 00:29:30.179 [2024-07-15 16:21:05.909590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.179 [2024-07-15 16:21:05.909598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.179 qpair failed and we were unable to recover it. 00:29:30.179 [2024-07-15 16:21:05.909987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.179 [2024-07-15 16:21:05.909996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.179 qpair failed and we were unable to recover it. 00:29:30.179 [2024-07-15 16:21:05.910379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.179 [2024-07-15 16:21:05.910387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.179 qpair failed and we were unable to recover it. 00:29:30.179 [2024-07-15 16:21:05.910805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.179 [2024-07-15 16:21:05.910813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.179 qpair failed and we were unable to recover it. 00:29:30.179 [2024-07-15 16:21:05.911076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.179 [2024-07-15 16:21:05.911085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.179 qpair failed and we were unable to recover it. 00:29:30.179 [2024-07-15 16:21:05.911270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.179 [2024-07-15 16:21:05.911278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.179 qpair failed and we were unable to recover it. 
00:29:30.179 [2024-07-15 16:21:05.911670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.179 [2024-07-15 16:21:05.911679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.179 qpair failed and we were unable to recover it. 00:29:30.179 [2024-07-15 16:21:05.912052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.179 [2024-07-15 16:21:05.912061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.179 qpair failed and we were unable to recover it. 00:29:30.179 [2024-07-15 16:21:05.912361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.179 [2024-07-15 16:21:05.912371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.179 qpair failed and we were unable to recover it. 00:29:30.179 [2024-07-15 16:21:05.912561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.179 [2024-07-15 16:21:05.912570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.179 qpair failed and we were unable to recover it. 00:29:30.179 [2024-07-15 16:21:05.912843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.179 [2024-07-15 16:21:05.912852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.179 qpair failed and we were unable to recover it. 00:29:30.179 [2024-07-15 16:21:05.913246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.179 [2024-07-15 16:21:05.913254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.179 qpair failed and we were unable to recover it. 00:29:30.179 [2024-07-15 16:21:05.913637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.179 [2024-07-15 16:21:05.913646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.179 qpair failed and we were unable to recover it. 00:29:30.179 [2024-07-15 16:21:05.913936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.179 [2024-07-15 16:21:05.913944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.179 qpair failed and we were unable to recover it. 00:29:30.180 [2024-07-15 16:21:05.914299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.180 [2024-07-15 16:21:05.914307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.180 qpair failed and we were unable to recover it. 00:29:30.180 [2024-07-15 16:21:05.914697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.180 [2024-07-15 16:21:05.914704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.180 qpair failed and we were unable to recover it. 
00:29:30.180 [2024-07-15 16:21:05.915097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.180 [2024-07-15 16:21:05.915106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.180 qpair failed and we were unable to recover it. 00:29:30.180 [2024-07-15 16:21:05.915524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.180 [2024-07-15 16:21:05.915533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.180 qpair failed and we were unable to recover it. 00:29:30.180 [2024-07-15 16:21:05.915920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.180 [2024-07-15 16:21:05.915928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.180 qpair failed and we were unable to recover it. 00:29:30.180 [2024-07-15 16:21:05.916279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.180 [2024-07-15 16:21:05.916288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.180 qpair failed and we were unable to recover it. 00:29:30.180 [2024-07-15 16:21:05.916637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.180 [2024-07-15 16:21:05.916646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.180 qpair failed and we were unable to recover it. 00:29:30.180 [2024-07-15 16:21:05.917031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.180 [2024-07-15 16:21:05.917040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.180 qpair failed and we were unable to recover it. 00:29:30.180 [2024-07-15 16:21:05.917442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.180 [2024-07-15 16:21:05.917450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.180 qpair failed and we were unable to recover it. 00:29:30.180 [2024-07-15 16:21:05.917870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.180 [2024-07-15 16:21:05.917878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.180 qpair failed and we were unable to recover it. 00:29:30.180 [2024-07-15 16:21:05.918092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.180 [2024-07-15 16:21:05.918101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.180 qpair failed and we were unable to recover it. 00:29:30.180 [2024-07-15 16:21:05.918411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.180 [2024-07-15 16:21:05.918425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.180 qpair failed and we were unable to recover it. 
00:29:30.180 [2024-07-15 16:21:05.918821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.180 [2024-07-15 16:21:05.918829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.180 qpair failed and we were unable to recover it. 00:29:30.180 [2024-07-15 16:21:05.919205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.180 [2024-07-15 16:21:05.919214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.180 qpair failed and we were unable to recover it. 00:29:30.180 [2024-07-15 16:21:05.919630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.180 [2024-07-15 16:21:05.919638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.180 qpair failed and we were unable to recover it. 00:29:30.180 [2024-07-15 16:21:05.919945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.180 [2024-07-15 16:21:05.919954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.180 qpair failed and we were unable to recover it. 00:29:30.180 [2024-07-15 16:21:05.920325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.180 [2024-07-15 16:21:05.920333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.180 qpair failed and we were unable to recover it. 00:29:30.180 [2024-07-15 16:21:05.920721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.180 [2024-07-15 16:21:05.920729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.180 qpair failed and we were unable to recover it. 00:29:30.180 [2024-07-15 16:21:05.921114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.180 [2024-07-15 16:21:05.921127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.180 qpair failed and we were unable to recover it. 00:29:30.180 [2024-07-15 16:21:05.921512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.180 [2024-07-15 16:21:05.921520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.180 qpair failed and we were unable to recover it. 00:29:30.180 [2024-07-15 16:21:05.921913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.180 [2024-07-15 16:21:05.921921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.180 qpair failed and we were unable to recover it. 00:29:30.180 [2024-07-15 16:21:05.922551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.180 [2024-07-15 16:21:05.922580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.180 qpair failed and we were unable to recover it. 
00:29:30.180 [2024-07-15 16:21:05.923003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.180 [2024-07-15 16:21:05.923012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.180 qpair failed and we were unable to recover it. 00:29:30.180 [2024-07-15 16:21:05.923328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.180 [2024-07-15 16:21:05.923337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.180 qpair failed and we were unable to recover it. 00:29:30.180 [2024-07-15 16:21:05.923750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.180 [2024-07-15 16:21:05.923758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.180 qpair failed and we were unable to recover it. 00:29:30.180 [2024-07-15 16:21:05.924159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.180 [2024-07-15 16:21:05.924168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.180 qpair failed and we were unable to recover it. 00:29:30.180 [2024-07-15 16:21:05.924536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.180 [2024-07-15 16:21:05.924544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.180 qpair failed and we were unable to recover it. 00:29:30.180 [2024-07-15 16:21:05.924933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.180 [2024-07-15 16:21:05.924942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.180 qpair failed and we were unable to recover it. 00:29:30.180 [2024-07-15 16:21:05.925286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.180 [2024-07-15 16:21:05.925294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.180 qpair failed and we were unable to recover it. 00:29:30.180 [2024-07-15 16:21:05.925731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.180 [2024-07-15 16:21:05.925739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.180 qpair failed and we were unable to recover it. 00:29:30.180 [2024-07-15 16:21:05.926144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.180 [2024-07-15 16:21:05.926152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.180 qpair failed and we were unable to recover it. 00:29:30.180 [2024-07-15 16:21:05.926534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.180 [2024-07-15 16:21:05.926543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.180 qpair failed and we were unable to recover it. 
00:29:30.180 [2024-07-15 16:21:05.926962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.180 [2024-07-15 16:21:05.926971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.180 qpair failed and we were unable to recover it. 00:29:30.180 [2024-07-15 16:21:05.927367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.180 [2024-07-15 16:21:05.927375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.180 qpair failed and we were unable to recover it. 00:29:30.180 [2024-07-15 16:21:05.927671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.180 [2024-07-15 16:21:05.927680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.180 qpair failed and we were unable to recover it. 00:29:30.180 [2024-07-15 16:21:05.928128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.180 [2024-07-15 16:21:05.928137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.180 qpair failed and we were unable to recover it. 00:29:30.180 [2024-07-15 16:21:05.928515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.180 [2024-07-15 16:21:05.928523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.180 qpair failed and we were unable to recover it. 00:29:30.180 [2024-07-15 16:21:05.928937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.180 [2024-07-15 16:21:05.928945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.180 qpair failed and we were unable to recover it. 00:29:30.180 [2024-07-15 16:21:05.929445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.180 [2024-07-15 16:21:05.929474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.180 qpair failed and we were unable to recover it. 00:29:30.180 [2024-07-15 16:21:05.929892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.180 [2024-07-15 16:21:05.929901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.180 qpair failed and we were unable to recover it. 00:29:30.180 [2024-07-15 16:21:05.930109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.181 [2024-07-15 16:21:05.930120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.181 qpair failed and we were unable to recover it. 00:29:30.181 [2024-07-15 16:21:05.930410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.181 [2024-07-15 16:21:05.930418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.181 qpair failed and we were unable to recover it. 
00:29:30.181 [2024-07-15 16:21:05.930718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.181 [2024-07-15 16:21:05.930727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420
00:29:30.181 qpair failed and we were unable to recover it.
00:29:30.455 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 2024-07-15 16:21:05.930 through 16:21:06.010 ...]
00:29:30.455 [2024-07-15 16:21:06.011336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-07-15 16:21:06.011344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 00:29:30.455 [2024-07-15 16:21:06.011757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-07-15 16:21:06.011765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 00:29:30.455 [2024-07-15 16:21:06.012202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-07-15 16:21:06.012211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 00:29:30.455 [2024-07-15 16:21:06.012614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-07-15 16:21:06.012623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 00:29:30.455 [2024-07-15 16:21:06.013042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-07-15 16:21:06.013050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 00:29:30.455 [2024-07-15 16:21:06.013432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-07-15 16:21:06.013440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 00:29:30.455 [2024-07-15 16:21:06.013809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-07-15 16:21:06.013817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 00:29:30.455 [2024-07-15 16:21:06.014211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-07-15 16:21:06.014219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 00:29:30.455 [2024-07-15 16:21:06.014652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-07-15 16:21:06.014660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 00:29:30.455 [2024-07-15 16:21:06.015018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-07-15 16:21:06.015026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 
00:29:30.455 [2024-07-15 16:21:06.015489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-07-15 16:21:06.015498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 00:29:30.455 [2024-07-15 16:21:06.015889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-07-15 16:21:06.015897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 00:29:30.455 [2024-07-15 16:21:06.016323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-07-15 16:21:06.016331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 00:29:30.455 [2024-07-15 16:21:06.016750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-07-15 16:21:06.016759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 00:29:30.455 [2024-07-15 16:21:06.017194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-07-15 16:21:06.017202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.456 qpair failed and we were unable to recover it. 00:29:30.456 [2024-07-15 16:21:06.017478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-07-15 16:21:06.017486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.456 qpair failed and we were unable to recover it. 00:29:30.456 [2024-07-15 16:21:06.017880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-07-15 16:21:06.017889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.456 qpair failed and we were unable to recover it. 00:29:30.456 [2024-07-15 16:21:06.018307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-07-15 16:21:06.018315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.456 qpair failed and we were unable to recover it. 00:29:30.456 [2024-07-15 16:21:06.018585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-07-15 16:21:06.018593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.456 qpair failed and we were unable to recover it. 00:29:30.456 [2024-07-15 16:21:06.018981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-07-15 16:21:06.018989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.456 qpair failed and we were unable to recover it. 
00:29:30.456 [2024-07-15 16:21:06.019379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-07-15 16:21:06.019387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.456 qpair failed and we were unable to recover it. 00:29:30.456 [2024-07-15 16:21:06.019752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-07-15 16:21:06.019760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.456 qpair failed and we were unable to recover it. 00:29:30.456 [2024-07-15 16:21:06.020152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-07-15 16:21:06.020160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.456 qpair failed and we were unable to recover it. 00:29:30.456 [2024-07-15 16:21:06.020549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-07-15 16:21:06.020557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.456 qpair failed and we were unable to recover it. 00:29:30.456 [2024-07-15 16:21:06.020949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-07-15 16:21:06.020958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.456 qpair failed and we were unable to recover it. 00:29:30.456 [2024-07-15 16:21:06.021359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-07-15 16:21:06.021368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.456 qpair failed and we were unable to recover it. 00:29:30.456 [2024-07-15 16:21:06.021761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-07-15 16:21:06.021771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.456 qpair failed and we were unable to recover it. 00:29:30.456 [2024-07-15 16:21:06.022165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-07-15 16:21:06.022174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.456 qpair failed and we were unable to recover it. 00:29:30.456 [2024-07-15 16:21:06.022568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-07-15 16:21:06.022576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.456 qpair failed and we were unable to recover it. 00:29:30.456 [2024-07-15 16:21:06.022913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-07-15 16:21:06.022920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.456 qpair failed and we were unable to recover it. 
00:29:30.456 [2024-07-15 16:21:06.023271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-07-15 16:21:06.023280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.456 qpair failed and we were unable to recover it. 00:29:30.456 [2024-07-15 16:21:06.023686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-07-15 16:21:06.023694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.456 qpair failed and we were unable to recover it. 00:29:30.456 [2024-07-15 16:21:06.024160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-07-15 16:21:06.024168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.456 qpair failed and we were unable to recover it. 00:29:30.456 [2024-07-15 16:21:06.024555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-07-15 16:21:06.024562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.456 qpair failed and we were unable to recover it. 00:29:30.456 [2024-07-15 16:21:06.024949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-07-15 16:21:06.024957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.456 qpair failed and we were unable to recover it. 00:29:30.456 [2024-07-15 16:21:06.025252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-07-15 16:21:06.025261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.456 qpair failed and we were unable to recover it. 00:29:30.456 [2024-07-15 16:21:06.025670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-07-15 16:21:06.025679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.456 qpair failed and we were unable to recover it. 00:29:30.456 [2024-07-15 16:21:06.026092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-07-15 16:21:06.026100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.456 qpair failed and we were unable to recover it. 00:29:30.456 [2024-07-15 16:21:06.026509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-07-15 16:21:06.026517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.456 qpair failed and we were unable to recover it. 00:29:30.456 [2024-07-15 16:21:06.026902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-07-15 16:21:06.026910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.456 qpair failed and we were unable to recover it. 
00:29:30.456 [2024-07-15 16:21:06.027237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-07-15 16:21:06.027246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.456 qpair failed and we were unable to recover it. 00:29:30.456 [2024-07-15 16:21:06.027615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-07-15 16:21:06.027623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.456 qpair failed and we were unable to recover it. 00:29:30.456 [2024-07-15 16:21:06.027918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-07-15 16:21:06.027925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.456 qpair failed and we were unable to recover it. 00:29:30.456 [2024-07-15 16:21:06.028330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-07-15 16:21:06.028339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.456 qpair failed and we were unable to recover it. 00:29:30.456 [2024-07-15 16:21:06.028748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-07-15 16:21:06.028757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.456 qpair failed and we were unable to recover it. 00:29:30.456 [2024-07-15 16:21:06.029134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-07-15 16:21:06.029144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.456 qpair failed and we were unable to recover it. 00:29:30.456 [2024-07-15 16:21:06.029490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-07-15 16:21:06.029498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.456 qpair failed and we were unable to recover it. 00:29:30.456 [2024-07-15 16:21:06.029886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-07-15 16:21:06.029894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.456 qpair failed and we were unable to recover it. 00:29:30.456 [2024-07-15 16:21:06.030281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-07-15 16:21:06.030290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.456 qpair failed and we were unable to recover it. 00:29:30.456 [2024-07-15 16:21:06.030684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-07-15 16:21:06.030692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.456 qpair failed and we were unable to recover it. 
00:29:30.456 [2024-07-15 16:21:06.031056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-07-15 16:21:06.031064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.456 qpair failed and we were unable to recover it. 00:29:30.456 [2024-07-15 16:21:06.031226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-07-15 16:21:06.031236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.456 qpair failed and we were unable to recover it. 00:29:30.456 [2024-07-15 16:21:06.031532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-07-15 16:21:06.031541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.456 qpair failed and we were unable to recover it. 00:29:30.456 [2024-07-15 16:21:06.031886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.457 [2024-07-15 16:21:06.031894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.457 qpair failed and we were unable to recover it. 00:29:30.457 [2024-07-15 16:21:06.032281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.457 [2024-07-15 16:21:06.032289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.457 qpair failed and we were unable to recover it. 00:29:30.457 [2024-07-15 16:21:06.032692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.457 [2024-07-15 16:21:06.032701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.457 qpair failed and we were unable to recover it. 00:29:30.457 [2024-07-15 16:21:06.033096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.457 [2024-07-15 16:21:06.033104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.457 qpair failed and we were unable to recover it. 00:29:30.457 [2024-07-15 16:21:06.033500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.457 [2024-07-15 16:21:06.033508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.457 qpair failed and we were unable to recover it. 00:29:30.457 [2024-07-15 16:21:06.033893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.457 [2024-07-15 16:21:06.033901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.457 qpair failed and we were unable to recover it. 00:29:30.457 [2024-07-15 16:21:06.034302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.457 [2024-07-15 16:21:06.034310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.457 qpair failed and we were unable to recover it. 
00:29:30.457 [2024-07-15 16:21:06.034703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.457 [2024-07-15 16:21:06.034711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.457 qpair failed and we were unable to recover it. 00:29:30.457 [2024-07-15 16:21:06.035002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.457 [2024-07-15 16:21:06.035010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.457 qpair failed and we were unable to recover it. 00:29:30.457 [2024-07-15 16:21:06.035402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.457 [2024-07-15 16:21:06.035411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.457 qpair failed and we were unable to recover it. 00:29:30.457 [2024-07-15 16:21:06.035809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.457 [2024-07-15 16:21:06.035818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.457 qpair failed and we were unable to recover it. 00:29:30.457 [2024-07-15 16:21:06.036254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.457 [2024-07-15 16:21:06.036262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.457 qpair failed and we were unable to recover it. 00:29:30.457 [2024-07-15 16:21:06.036652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.457 [2024-07-15 16:21:06.036660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.457 qpair failed and we were unable to recover it. 00:29:30.457 [2024-07-15 16:21:06.037051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.457 [2024-07-15 16:21:06.037061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.457 qpair failed and we were unable to recover it. 00:29:30.457 [2024-07-15 16:21:06.037459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.457 [2024-07-15 16:21:06.037467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.457 qpair failed and we were unable to recover it. 00:29:30.457 [2024-07-15 16:21:06.037811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.457 [2024-07-15 16:21:06.037819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.457 qpair failed and we were unable to recover it. 00:29:30.457 [2024-07-15 16:21:06.038240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.457 [2024-07-15 16:21:06.038249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.457 qpair failed and we were unable to recover it. 
00:29:30.457 [2024-07-15 16:21:06.038639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.457 [2024-07-15 16:21:06.038647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.457 qpair failed and we were unable to recover it. 00:29:30.457 [2024-07-15 16:21:06.039107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.457 [2024-07-15 16:21:06.039114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.457 qpair failed and we were unable to recover it. 00:29:30.457 [2024-07-15 16:21:06.039512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.457 [2024-07-15 16:21:06.039520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.457 qpair failed and we were unable to recover it. 00:29:30.457 [2024-07-15 16:21:06.039818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.457 [2024-07-15 16:21:06.039827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.457 qpair failed and we were unable to recover it. 00:29:30.457 [2024-07-15 16:21:06.040132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.457 [2024-07-15 16:21:06.040140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.457 qpair failed and we were unable to recover it. 00:29:30.457 [2024-07-15 16:21:06.040521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.457 [2024-07-15 16:21:06.040529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.457 qpair failed and we were unable to recover it. 00:29:30.457 [2024-07-15 16:21:06.040916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.457 [2024-07-15 16:21:06.040923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.457 qpair failed and we were unable to recover it. 00:29:30.457 [2024-07-15 16:21:06.041244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.457 [2024-07-15 16:21:06.041253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.457 qpair failed and we were unable to recover it. 00:29:30.457 [2024-07-15 16:21:06.041664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.457 [2024-07-15 16:21:06.041672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.457 qpair failed and we were unable to recover it. 00:29:30.457 [2024-07-15 16:21:06.042149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.457 [2024-07-15 16:21:06.042158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.457 qpair failed and we were unable to recover it. 
00:29:30.457 [2024-07-15 16:21:06.042545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.457 [2024-07-15 16:21:06.042553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.457 qpair failed and we were unable to recover it. 00:29:30.457 [2024-07-15 16:21:06.042930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.457 [2024-07-15 16:21:06.042938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.457 qpair failed and we were unable to recover it. 00:29:30.457 [2024-07-15 16:21:06.043307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.457 [2024-07-15 16:21:06.043315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.457 qpair failed and we were unable to recover it. 00:29:30.457 [2024-07-15 16:21:06.043635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.457 [2024-07-15 16:21:06.043644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.457 qpair failed and we were unable to recover it. 00:29:30.457 [2024-07-15 16:21:06.044047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.457 [2024-07-15 16:21:06.044055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.457 qpair failed and we were unable to recover it. 00:29:30.457 [2024-07-15 16:21:06.044454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.457 [2024-07-15 16:21:06.044463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.457 qpair failed and we were unable to recover it. 00:29:30.457 [2024-07-15 16:21:06.044741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.457 [2024-07-15 16:21:06.044749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.457 qpair failed and we were unable to recover it. 00:29:30.457 [2024-07-15 16:21:06.045132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.457 [2024-07-15 16:21:06.045140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.457 qpair failed and we were unable to recover it. 00:29:30.457 [2024-07-15 16:21:06.045474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.457 [2024-07-15 16:21:06.045482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.457 qpair failed and we were unable to recover it. 00:29:30.457 [2024-07-15 16:21:06.045928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.457 [2024-07-15 16:21:06.045936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.457 qpair failed and we were unable to recover it. 
00:29:30.457 [2024-07-15 16:21:06.046246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.457 [2024-07-15 16:21:06.046256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.457 qpair failed and we were unable to recover it. 00:29:30.457 [2024-07-15 16:21:06.046677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.457 [2024-07-15 16:21:06.046685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.457 qpair failed and we were unable to recover it. 00:29:30.457 [2024-07-15 16:21:06.047072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.457 [2024-07-15 16:21:06.047080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.457 qpair failed and we were unable to recover it. 00:29:30.457 [2024-07-15 16:21:06.047246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.458 [2024-07-15 16:21:06.047253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.458 qpair failed and we were unable to recover it. 00:29:30.458 [2024-07-15 16:21:06.047685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.458 [2024-07-15 16:21:06.047693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.458 qpair failed and we were unable to recover it. 00:29:30.458 [2024-07-15 16:21:06.048139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.458 [2024-07-15 16:21:06.048147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.458 qpair failed and we were unable to recover it. 00:29:30.458 [2024-07-15 16:21:06.048552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.458 [2024-07-15 16:21:06.048560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.458 qpair failed and we were unable to recover it. 00:29:30.458 [2024-07-15 16:21:06.048841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.458 [2024-07-15 16:21:06.048850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.458 qpair failed and we were unable to recover it. 00:29:30.458 [2024-07-15 16:21:06.049235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.458 [2024-07-15 16:21:06.049244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.458 qpair failed and we were unable to recover it. 00:29:30.458 [2024-07-15 16:21:06.049636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.458 [2024-07-15 16:21:06.049645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.458 qpair failed and we were unable to recover it. 
00:29:30.458 [2024-07-15 16:21:06.050038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.458 [2024-07-15 16:21:06.050046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.458 qpair failed and we were unable to recover it. 00:29:30.458 [2024-07-15 16:21:06.050321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.458 [2024-07-15 16:21:06.050329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.458 qpair failed and we were unable to recover it. 00:29:30.458 [2024-07-15 16:21:06.050663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.458 [2024-07-15 16:21:06.050672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.458 qpair failed and we were unable to recover it. 00:29:30.458 [2024-07-15 16:21:06.050979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.458 [2024-07-15 16:21:06.050987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.458 qpair failed and we were unable to recover it. 00:29:30.458 [2024-07-15 16:21:06.051392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.458 [2024-07-15 16:21:06.051401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.458 qpair failed and we were unable to recover it. 00:29:30.458 [2024-07-15 16:21:06.051786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.458 [2024-07-15 16:21:06.051794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.458 qpair failed and we were unable to recover it. 00:29:30.458 [2024-07-15 16:21:06.052134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.458 [2024-07-15 16:21:06.052145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.458 qpair failed and we were unable to recover it. 00:29:30.458 [2024-07-15 16:21:06.052568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.458 [2024-07-15 16:21:06.052576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.458 qpair failed and we were unable to recover it. 00:29:30.458 [2024-07-15 16:21:06.052981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.458 [2024-07-15 16:21:06.052989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.458 qpair failed and we were unable to recover it. 00:29:30.458 [2024-07-15 16:21:06.053398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.458 [2024-07-15 16:21:06.053406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.458 qpair failed and we were unable to recover it. 
00:29:30.458 [2024-07-15 16:21:06.053714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.458 [2024-07-15 16:21:06.053723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.458 qpair failed and we were unable to recover it. 00:29:30.458 [2024-07-15 16:21:06.053912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.458 [2024-07-15 16:21:06.053921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.458 qpair failed and we were unable to recover it. 00:29:30.458 [2024-07-15 16:21:06.054226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.458 [2024-07-15 16:21:06.054234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.458 qpair failed and we were unable to recover it. 00:29:30.458 [2024-07-15 16:21:06.054637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.458 [2024-07-15 16:21:06.054645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.458 qpair failed and we were unable to recover it. 00:29:30.458 [2024-07-15 16:21:06.055055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.458 [2024-07-15 16:21:06.055063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.458 qpair failed and we were unable to recover it. 00:29:30.458 [2024-07-15 16:21:06.055471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.458 [2024-07-15 16:21:06.055480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.458 qpair failed and we were unable to recover it. 00:29:30.458 [2024-07-15 16:21:06.055727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.458 [2024-07-15 16:21:06.055736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.458 qpair failed and we were unable to recover it. 00:29:30.458 [2024-07-15 16:21:06.056035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.458 [2024-07-15 16:21:06.056043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.458 qpair failed and we were unable to recover it. 00:29:30.458 [2024-07-15 16:21:06.056442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.458 [2024-07-15 16:21:06.056450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.458 qpair failed and we were unable to recover it. 00:29:30.458 [2024-07-15 16:21:06.056842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.458 [2024-07-15 16:21:06.056850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.458 qpair failed and we were unable to recover it. 
00:29:30.458 [2024-07-15 16:21:06.057246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.458 [2024-07-15 16:21:06.057254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.458 qpair failed and we were unable to recover it. 00:29:30.458 [2024-07-15 16:21:06.057599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.458 [2024-07-15 16:21:06.057608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.458 qpair failed and we were unable to recover it. 00:29:30.458 [2024-07-15 16:21:06.058003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.458 [2024-07-15 16:21:06.058011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.458 qpair failed and we were unable to recover it. 00:29:30.458 [2024-07-15 16:21:06.058397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.458 [2024-07-15 16:21:06.058406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.458 qpair failed and we were unable to recover it. 00:29:30.458 [2024-07-15 16:21:06.058805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.458 [2024-07-15 16:21:06.058813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.458 qpair failed and we were unable to recover it. 00:29:30.458 [2024-07-15 16:21:06.059200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.459 [2024-07-15 16:21:06.059208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.459 qpair failed and we were unable to recover it. 00:29:30.459 [2024-07-15 16:21:06.059653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.459 [2024-07-15 16:21:06.059660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.459 qpair failed and we were unable to recover it. 00:29:30.459 [2024-07-15 16:21:06.059952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.459 [2024-07-15 16:21:06.059960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.459 qpair failed and we were unable to recover it. 00:29:30.459 [2024-07-15 16:21:06.060332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.459 [2024-07-15 16:21:06.060341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.459 qpair failed and we were unable to recover it. 00:29:30.459 [2024-07-15 16:21:06.060743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.459 [2024-07-15 16:21:06.060751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.459 qpair failed and we were unable to recover it. 
00:29:30.459 [2024-07-15 16:21:06.061161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:29:30.459 [2024-07-15 16:21:06.061169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 
00:29:30.459 qpair failed and we were unable to recover it. 
00:29:30.459 [... the same three-line error sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for the remaining ~208 connection attempts logged between 16:21:06.061 and 16:21:06.141 ...] 
00:29:30.464 [2024-07-15 16:21:06.141227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:29:30.464 [2024-07-15 16:21:06.141236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 
00:29:30.464 qpair failed and we were unable to recover it. 
00:29:30.464 [2024-07-15 16:21:06.141652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.464 [2024-07-15 16:21:06.141660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.464 qpair failed and we were unable to recover it. 00:29:30.464 [2024-07-15 16:21:06.142075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.464 [2024-07-15 16:21:06.142083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.464 qpair failed and we were unable to recover it. 00:29:30.464 [2024-07-15 16:21:06.142497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.464 [2024-07-15 16:21:06.142505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.464 qpair failed and we were unable to recover it. 00:29:30.464 [2024-07-15 16:21:06.142804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.464 [2024-07-15 16:21:06.142813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.464 qpair failed and we were unable to recover it. 00:29:30.464 [2024-07-15 16:21:06.143230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.464 [2024-07-15 16:21:06.143238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.464 qpair failed and we were unable to recover it. 00:29:30.464 [2024-07-15 16:21:06.143639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.464 [2024-07-15 16:21:06.143647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.464 qpair failed and we were unable to recover it. 00:29:30.464 [2024-07-15 16:21:06.144068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.464 [2024-07-15 16:21:06.144076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.464 qpair failed and we were unable to recover it. 00:29:30.464 [2024-07-15 16:21:06.144503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.464 [2024-07-15 16:21:06.144512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.464 qpair failed and we were unable to recover it. 00:29:30.464 [2024-07-15 16:21:06.144904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.464 [2024-07-15 16:21:06.144912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.464 qpair failed and we were unable to recover it. 00:29:30.464 [2024-07-15 16:21:06.145332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.464 [2024-07-15 16:21:06.145341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.464 qpair failed and we were unable to recover it. 
00:29:30.464 [2024-07-15 16:21:06.145649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.464 [2024-07-15 16:21:06.145658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.464 qpair failed and we were unable to recover it. 00:29:30.464 [2024-07-15 16:21:06.145846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.464 [2024-07-15 16:21:06.145855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.464 qpair failed and we were unable to recover it. 00:29:30.464 [2024-07-15 16:21:06.146286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.464 [2024-07-15 16:21:06.146295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.464 qpair failed and we were unable to recover it. 00:29:30.464 [2024-07-15 16:21:06.146776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.464 [2024-07-15 16:21:06.146784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.464 qpair failed and we were unable to recover it. 00:29:30.464 [2024-07-15 16:21:06.147168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.464 [2024-07-15 16:21:06.147176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.464 qpair failed and we were unable to recover it. 00:29:30.464 [2024-07-15 16:21:06.147471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.464 [2024-07-15 16:21:06.147480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.464 qpair failed and we were unable to recover it. 00:29:30.464 [2024-07-15 16:21:06.147885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.464 [2024-07-15 16:21:06.147893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.464 qpair failed and we were unable to recover it. 00:29:30.464 [2024-07-15 16:21:06.148283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.464 [2024-07-15 16:21:06.148291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.464 qpair failed and we were unable to recover it. 00:29:30.464 [2024-07-15 16:21:06.148598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.464 [2024-07-15 16:21:06.148607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.464 qpair failed and we were unable to recover it. 00:29:30.464 [2024-07-15 16:21:06.149000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.464 [2024-07-15 16:21:06.149007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.464 qpair failed and we were unable to recover it. 
00:29:30.464 [2024-07-15 16:21:06.149291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.464 [2024-07-15 16:21:06.149299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.464 qpair failed and we were unable to recover it. 00:29:30.464 [2024-07-15 16:21:06.149583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.465 [2024-07-15 16:21:06.149591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.465 qpair failed and we were unable to recover it. 00:29:30.465 [2024-07-15 16:21:06.149973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.465 [2024-07-15 16:21:06.149982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.465 qpair failed and we were unable to recover it. 00:29:30.465 [2024-07-15 16:21:06.150393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.465 [2024-07-15 16:21:06.150402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.465 qpair failed and we were unable to recover it. 00:29:30.465 [2024-07-15 16:21:06.150698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.465 [2024-07-15 16:21:06.150706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.465 qpair failed and we were unable to recover it. 00:29:30.465 [2024-07-15 16:21:06.150971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.465 [2024-07-15 16:21:06.150979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.465 qpair failed and we were unable to recover it. 00:29:30.465 [2024-07-15 16:21:06.151376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.465 [2024-07-15 16:21:06.151384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.465 qpair failed and we were unable to recover it. 00:29:30.465 [2024-07-15 16:21:06.151771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.465 [2024-07-15 16:21:06.151779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.465 qpair failed and we were unable to recover it. 00:29:30.465 [2024-07-15 16:21:06.151985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.465 [2024-07-15 16:21:06.151994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.465 qpair failed and we were unable to recover it. 00:29:30.465 [2024-07-15 16:21:06.152384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.465 [2024-07-15 16:21:06.152392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.465 qpair failed and we were unable to recover it. 
00:29:30.465 [2024-07-15 16:21:06.152780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.465 [2024-07-15 16:21:06.152789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.465 qpair failed and we were unable to recover it. 00:29:30.465 [2024-07-15 16:21:06.153181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.465 [2024-07-15 16:21:06.153190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.465 qpair failed and we were unable to recover it. 00:29:30.465 [2024-07-15 16:21:06.153613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.465 [2024-07-15 16:21:06.153622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.465 qpair failed and we were unable to recover it. 00:29:30.465 [2024-07-15 16:21:06.153916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.465 [2024-07-15 16:21:06.153926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.465 qpair failed and we were unable to recover it. 00:29:30.465 [2024-07-15 16:21:06.154224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.465 [2024-07-15 16:21:06.154232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.465 qpair failed and we were unable to recover it. 00:29:30.465 [2024-07-15 16:21:06.154666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.465 [2024-07-15 16:21:06.154674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.465 qpair failed and we were unable to recover it. 00:29:30.465 [2024-07-15 16:21:06.154935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.465 [2024-07-15 16:21:06.154943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.465 qpair failed and we were unable to recover it. 00:29:30.465 [2024-07-15 16:21:06.155246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.465 [2024-07-15 16:21:06.155254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.465 qpair failed and we were unable to recover it. 00:29:30.465 [2024-07-15 16:21:06.155671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.465 [2024-07-15 16:21:06.155679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.465 qpair failed and we were unable to recover it. 00:29:30.465 [2024-07-15 16:21:06.155947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.465 [2024-07-15 16:21:06.155955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.465 qpair failed and we were unable to recover it. 
00:29:30.465 [2024-07-15 16:21:06.156258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.465 [2024-07-15 16:21:06.156267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.465 qpair failed and we were unable to recover it. 00:29:30.465 [2024-07-15 16:21:06.156664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.465 [2024-07-15 16:21:06.156672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.465 qpair failed and we were unable to recover it. 00:29:30.465 [2024-07-15 16:21:06.157073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.465 [2024-07-15 16:21:06.157080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.465 qpair failed and we were unable to recover it. 00:29:30.465 [2024-07-15 16:21:06.157486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.465 [2024-07-15 16:21:06.157495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.465 qpair failed and we were unable to recover it. 00:29:30.465 [2024-07-15 16:21:06.157996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.465 [2024-07-15 16:21:06.158005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.465 qpair failed and we were unable to recover it. 00:29:30.465 [2024-07-15 16:21:06.158428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.465 [2024-07-15 16:21:06.158458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.465 qpair failed and we were unable to recover it. 00:29:30.465 [2024-07-15 16:21:06.158855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.465 [2024-07-15 16:21:06.158865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.465 qpair failed and we were unable to recover it. 00:29:30.465 [2024-07-15 16:21:06.159368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.465 [2024-07-15 16:21:06.159397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.465 qpair failed and we were unable to recover it. 00:29:30.465 [2024-07-15 16:21:06.159799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.465 [2024-07-15 16:21:06.159809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.465 qpair failed and we were unable to recover it. 00:29:30.465 [2024-07-15 16:21:06.160228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.465 [2024-07-15 16:21:06.160237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.465 qpair failed and we were unable to recover it. 
00:29:30.465 [2024-07-15 16:21:06.160651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.465 [2024-07-15 16:21:06.160659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.465 qpair failed and we were unable to recover it. 00:29:30.465 [2024-07-15 16:21:06.161055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.465 [2024-07-15 16:21:06.161063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.465 qpair failed and we were unable to recover it. 00:29:30.465 [2024-07-15 16:21:06.161375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.465 [2024-07-15 16:21:06.161385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.465 qpair failed and we were unable to recover it. 00:29:30.465 [2024-07-15 16:21:06.161648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.465 [2024-07-15 16:21:06.161657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.465 qpair failed and we were unable to recover it. 00:29:30.465 [2024-07-15 16:21:06.162080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.465 [2024-07-15 16:21:06.162089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.465 qpair failed and we were unable to recover it. 00:29:30.465 [2024-07-15 16:21:06.162513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.465 [2024-07-15 16:21:06.162522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.465 qpair failed and we were unable to recover it. 00:29:30.465 [2024-07-15 16:21:06.162887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.465 [2024-07-15 16:21:06.162895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.465 qpair failed and we were unable to recover it. 00:29:30.465 [2024-07-15 16:21:06.163280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.465 [2024-07-15 16:21:06.163288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.465 qpair failed and we were unable to recover it. 00:29:30.465 [2024-07-15 16:21:06.163677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.465 [2024-07-15 16:21:06.163685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.465 qpair failed and we were unable to recover it. 00:29:30.465 [2024-07-15 16:21:06.163937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.465 [2024-07-15 16:21:06.163945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.465 qpair failed and we were unable to recover it. 
00:29:30.465 [2024-07-15 16:21:06.164318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.465 [2024-07-15 16:21:06.164326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.465 qpair failed and we were unable to recover it. 00:29:30.465 [2024-07-15 16:21:06.164686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.465 [2024-07-15 16:21:06.164694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.465 qpair failed and we were unable to recover it. 00:29:30.466 [2024-07-15 16:21:06.165096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.466 [2024-07-15 16:21:06.165105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.466 qpair failed and we were unable to recover it. 00:29:30.466 [2024-07-15 16:21:06.165517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.466 [2024-07-15 16:21:06.165527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.466 qpair failed and we were unable to recover it. 00:29:30.466 [2024-07-15 16:21:06.165923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.466 [2024-07-15 16:21:06.165933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.466 qpair failed and we were unable to recover it. 00:29:30.466 [2024-07-15 16:21:06.166386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.466 [2024-07-15 16:21:06.166415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.466 qpair failed and we were unable to recover it. 00:29:30.466 [2024-07-15 16:21:06.166834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.466 [2024-07-15 16:21:06.166844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.466 qpair failed and we were unable to recover it. 00:29:30.466 [2024-07-15 16:21:06.167051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.466 [2024-07-15 16:21:06.167059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.466 qpair failed and we were unable to recover it. 00:29:30.466 [2024-07-15 16:21:06.167482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.466 [2024-07-15 16:21:06.167491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.466 qpair failed and we were unable to recover it. 00:29:30.466 [2024-07-15 16:21:06.167778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.466 [2024-07-15 16:21:06.167789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.466 qpair failed and we were unable to recover it. 
00:29:30.466 [2024-07-15 16:21:06.168208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.466 [2024-07-15 16:21:06.168216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.466 qpair failed and we were unable to recover it. 00:29:30.466 [2024-07-15 16:21:06.168494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.466 [2024-07-15 16:21:06.168502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.466 qpair failed and we were unable to recover it. 00:29:30.466 [2024-07-15 16:21:06.168924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.466 [2024-07-15 16:21:06.168932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.466 qpair failed and we were unable to recover it. 00:29:30.466 [2024-07-15 16:21:06.169403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.466 [2024-07-15 16:21:06.169414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.466 qpair failed and we were unable to recover it. 00:29:30.466 [2024-07-15 16:21:06.169725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.466 [2024-07-15 16:21:06.169735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.466 qpair failed and we were unable to recover it. 00:29:30.466 [2024-07-15 16:21:06.170132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.466 [2024-07-15 16:21:06.170140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.466 qpair failed and we were unable to recover it. 00:29:30.466 [2024-07-15 16:21:06.170424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.466 [2024-07-15 16:21:06.170432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.466 qpair failed and we were unable to recover it. 00:29:30.466 [2024-07-15 16:21:06.170767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.466 [2024-07-15 16:21:06.170775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.466 qpair failed and we were unable to recover it. 00:29:30.466 [2024-07-15 16:21:06.171194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.466 [2024-07-15 16:21:06.171203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.466 qpair failed and we were unable to recover it. 00:29:30.466 [2024-07-15 16:21:06.171292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.466 [2024-07-15 16:21:06.171302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.466 qpair failed and we were unable to recover it. 
00:29:30.466 [2024-07-15 16:21:06.171610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.466 [2024-07-15 16:21:06.171618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.466 qpair failed and we were unable to recover it. 00:29:30.466 [2024-07-15 16:21:06.172010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.466 [2024-07-15 16:21:06.172019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.466 qpair failed and we were unable to recover it. 00:29:30.466 [2024-07-15 16:21:06.172441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.466 [2024-07-15 16:21:06.172449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.466 qpair failed and we were unable to recover it. 00:29:30.466 [2024-07-15 16:21:06.172863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.466 [2024-07-15 16:21:06.172871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.466 qpair failed and we were unable to recover it. 00:29:30.466 [2024-07-15 16:21:06.173273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.466 [2024-07-15 16:21:06.173281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.466 qpair failed and we were unable to recover it. 00:29:30.466 [2024-07-15 16:21:06.173736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.466 [2024-07-15 16:21:06.173745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.466 qpair failed and we were unable to recover it. 00:29:30.466 [2024-07-15 16:21:06.174132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.466 [2024-07-15 16:21:06.174141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.466 qpair failed and we were unable to recover it. 00:29:30.466 [2024-07-15 16:21:06.174538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.466 [2024-07-15 16:21:06.174547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.466 qpair failed and we were unable to recover it. 00:29:30.466 [2024-07-15 16:21:06.174944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.466 [2024-07-15 16:21:06.174952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.466 qpair failed and we were unable to recover it. 00:29:30.466 [2024-07-15 16:21:06.175352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.466 [2024-07-15 16:21:06.175360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.466 qpair failed and we were unable to recover it. 
00:29:30.466 [2024-07-15 16:21:06.175755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.466 [2024-07-15 16:21:06.175763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.466 qpair failed and we were unable to recover it. 00:29:30.466 [2024-07-15 16:21:06.176175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.466 [2024-07-15 16:21:06.176183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.466 qpair failed and we were unable to recover it. 00:29:30.466 [2024-07-15 16:21:06.176480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.466 [2024-07-15 16:21:06.176489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.466 qpair failed and we were unable to recover it. 00:29:30.466 [2024-07-15 16:21:06.176891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.466 [2024-07-15 16:21:06.176899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.466 qpair failed and we were unable to recover it. 00:29:30.466 [2024-07-15 16:21:06.177317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.466 [2024-07-15 16:21:06.177325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.466 qpair failed and we were unable to recover it. 00:29:30.466 [2024-07-15 16:21:06.177708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.466 [2024-07-15 16:21:06.177717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.466 qpair failed and we were unable to recover it. 00:29:30.466 [2024-07-15 16:21:06.178038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.466 [2024-07-15 16:21:06.178046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.466 qpair failed and we were unable to recover it. 00:29:30.466 [2024-07-15 16:21:06.178433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.466 [2024-07-15 16:21:06.178441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.466 qpair failed and we were unable to recover it. 00:29:30.466 [2024-07-15 16:21:06.178833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.466 [2024-07-15 16:21:06.178841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.466 qpair failed and we were unable to recover it. 00:29:30.466 [2024-07-15 16:21:06.179251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.466 [2024-07-15 16:21:06.179259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.466 qpair failed and we were unable to recover it. 
00:29:30.466 [2024-07-15 16:21:06.179668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.466 [2024-07-15 16:21:06.179676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.466 qpair failed and we were unable to recover it. 00:29:30.466 [2024-07-15 16:21:06.180094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.466 [2024-07-15 16:21:06.180102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.466 qpair failed and we were unable to recover it. 00:29:30.466 [2024-07-15 16:21:06.180401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 16:21:06.180410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 16:21:06.180806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 16:21:06.180814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 16:21:06.181220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 16:21:06.181228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 16:21:06.181528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 16:21:06.181537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 16:21:06.181930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 16:21:06.181938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 16:21:06.182332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 16:21:06.182341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 16:21:06.182814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 16:21:06.182822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 16:21:06.183208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 16:21:06.183216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 
00:29:30.467 [2024-07-15 16:21:06.183597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 16:21:06.183605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 16:21:06.183991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 16:21:06.183999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 16:21:06.184287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 16:21:06.184295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 16:21:06.184711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 16:21:06.184721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 16:21:06.185187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 16:21:06.185195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 16:21:06.185628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 16:21:06.185636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 16:21:06.186030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 16:21:06.186038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 16:21:06.186319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 16:21:06.186327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 16:21:06.186767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 16:21:06.186775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 16:21:06.187194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 16:21:06.187203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 
00:29:30.467 [2024-07-15 16:21:06.187597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 16:21:06.187605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 16:21:06.188005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 16:21:06.188014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 16:21:06.188404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 16:21:06.188413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 16:21:06.188838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 16:21:06.188847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 16:21:06.189248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 16:21:06.189256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 16:21:06.189552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 16:21:06.189560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 16:21:06.189827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 16:21:06.189836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 16:21:06.190250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 16:21:06.190258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 16:21:06.190649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 16:21:06.190657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 16:21:06.191049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 16:21:06.191057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 
00:29:30.467 [2024-07-15 16:21:06.191280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 16:21:06.191289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 16:21:06.191522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 16:21:06.191530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 16:21:06.191934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 16:21:06.191942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 16:21:06.192335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 16:21:06.192344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 16:21:06.192732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 16:21:06.192741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 16:21:06.193150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 16:21:06.193159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 16:21:06.193447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 16:21:06.193456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 16:21:06.193841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 16:21:06.193849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 16:21:06.194235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 16:21:06.194244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 16:21:06.194724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 16:21:06.194732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 
00:29:30.468 [2024-07-15 16:21:06.195029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 16:21:06.195038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 16:21:06.195429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 16:21:06.195437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 16:21:06.195822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 16:21:06.195830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 16:21:06.196217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 16:21:06.196226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 16:21:06.196621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 16:21:06.196629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 16:21:06.196926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 16:21:06.196935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 16:21:06.197419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 16:21:06.197428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 16:21:06.197720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 16:21:06.197729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 16:21:06.198024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 16:21:06.198032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 16:21:06.198435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 16:21:06.198443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 
00:29:30.468 [2024-07-15 16:21:06.198741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 16:21:06.198750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 16:21:06.199166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 16:21:06.199174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 16:21:06.199583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 16:21:06.199590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 16:21:06.199885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 16:21:06.199895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 16:21:06.200280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 16:21:06.200289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 16:21:06.200708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 16:21:06.200716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 16:21:06.201153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 16:21:06.201163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 16:21:06.201512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 16:21:06.201521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 16:21:06.201910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 16:21:06.201919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 16:21:06.202282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 16:21:06.202292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 
00:29:30.468 [2024-07-15 16:21:06.202712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 16:21:06.202720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 16:21:06.203121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 16:21:06.203133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 16:21:06.203435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 16:21:06.203442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 16:21:06.203828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 16:21:06.203836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 16:21:06.204234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 16:21:06.204243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 16:21:06.204656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 16:21:06.204664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 16:21:06.205042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 16:21:06.205051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 16:21:06.205457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 16:21:06.205466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 16:21:06.205862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 16:21:06.205871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 
00:29:30.468 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2476831 Killed "${NVMF_APP[@]}" "$@" 00:29:30.468 [2024-07-15 16:21:06.206265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 16:21:06.206275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 16:21:06.206592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 16:21:06.206600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 16:21:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:29:30.468 [2024-07-15 16:21:06.206996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.469 [2024-07-15 16:21:06.207004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.469 qpair failed and we were unable to recover it. 00:29:30.469 16:21:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:30.469 [2024-07-15 16:21:06.207490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.469 [2024-07-15 16:21:06.207499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.469 qpair failed and we were unable to recover it. 00:29:30.469 16:21:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:30.469 16:21:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:30.469 [2024-07-15 16:21:06.207883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.469 [2024-07-15 16:21:06.207891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.469 qpair failed and we were unable to recover it. 00:29:30.469 16:21:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:30.469 [2024-07-15 16:21:06.208398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.469 [2024-07-15 16:21:06.208427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.469 qpair failed and we were unable to recover it. 00:29:30.469 [2024-07-15 16:21:06.208729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.469 [2024-07-15 16:21:06.208739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.469 qpair failed and we were unable to recover it. 
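The flood of connect() failed, errno = 111 entries here is ECONNREFUSED: target_disconnect.sh has just killed the previous nvmf_tgt process (the "line 36: 2476831 Killed" message above), so the initiator's reconnect attempts to 10.0.0.2:4420 are refused until a new target is listening again. A minimal C sketch of that failure mode follows; it is illustrative only, not SPDK code, and only the address and port are taken from the log.

/* Illustrative only: shows how connect() to a port with no listener
 * fails with ECONNREFUSED (errno 111 on Linux), the same errno the
 * posix_sock_create messages in this log report. Not SPDK code. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                     /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);  /* target address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With the target process killed, this prints errno 111 (ECONNREFUSED). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}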
00:29:30.469 [2024-07-15 16:21:06.209040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.469 [2024-07-15 16:21:06.209048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.469 qpair failed and we were unable to recover it. 00:29:30.469 [2024-07-15 16:21:06.209441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.469 [2024-07-15 16:21:06.209453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.469 qpair failed and we were unable to recover it. 00:29:30.469 [2024-07-15 16:21:06.209664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.469 [2024-07-15 16:21:06.209674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.469 qpair failed and we were unable to recover it. 00:29:30.469 [2024-07-15 16:21:06.209980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.469 [2024-07-15 16:21:06.209989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.469 qpair failed and we were unable to recover it. 00:29:30.469 [2024-07-15 16:21:06.210270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.469 [2024-07-15 16:21:06.210279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.469 qpair failed and we were unable to recover it. 00:29:30.469 [2024-07-15 16:21:06.210692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.469 [2024-07-15 16:21:06.210700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.469 qpair failed and we were unable to recover it. 00:29:30.469 [2024-07-15 16:21:06.211005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.469 [2024-07-15 16:21:06.211014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.469 qpair failed and we were unable to recover it. 00:29:30.469 [2024-07-15 16:21:06.211418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.469 [2024-07-15 16:21:06.211426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.469 qpair failed and we were unable to recover it. 00:29:30.469 [2024-07-15 16:21:06.211827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.469 [2024-07-15 16:21:06.211835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.469 qpair failed and we were unable to recover it. 00:29:30.469 [2024-07-15 16:21:06.212226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.469 [2024-07-15 16:21:06.212233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.469 qpair failed and we were unable to recover it. 
00:29:30.469 [2024-07-15 16:21:06.212488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.469 [2024-07-15 16:21:06.212499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.469 qpair failed and we were unable to recover it. 00:29:30.469 [2024-07-15 16:21:06.212946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.469 [2024-07-15 16:21:06.212954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.469 qpair failed and we were unable to recover it. 00:29:30.469 [2024-07-15 16:21:06.213358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.469 [2024-07-15 16:21:06.213367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.469 qpair failed and we were unable to recover it. 00:29:30.469 [2024-07-15 16:21:06.213581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.469 [2024-07-15 16:21:06.213591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.469 qpair failed and we were unable to recover it. 00:29:30.469 [2024-07-15 16:21:06.213997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.469 [2024-07-15 16:21:06.214005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.469 qpair failed and we were unable to recover it. 00:29:30.469 [2024-07-15 16:21:06.214412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.469 [2024-07-15 16:21:06.214421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.469 qpair failed and we were unable to recover it. 00:29:30.469 [2024-07-15 16:21:06.214811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.469 [2024-07-15 16:21:06.214819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.469 qpair failed and we were unable to recover it. 00:29:30.469 [2024-07-15 16:21:06.215214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.469 [2024-07-15 16:21:06.215222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.469 qpair failed and we were unable to recover it. 00:29:30.469 16:21:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2477897 00:29:30.469 16:21:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2477897 00:29:30.469 [2024-07-15 16:21:06.215620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.469 [2024-07-15 16:21:06.215629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.469 qpair failed and we were unable to recover it. 
00:29:30.469 16:21:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2477897 ']' 00:29:30.469 16:21:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:30.469 [2024-07-15 16:21:06.216045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.469 [2024-07-15 16:21:06.216054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.469 qpair failed and we were unable to recover it. 00:29:30.469 16:21:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:30.469 [2024-07-15 16:21:06.216427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.469 [2024-07-15 16:21:06.216436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.469 qpair failed and we were unable to recover it. 00:29:30.469 16:21:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:30.469 16:21:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:30.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:30.469 [2024-07-15 16:21:06.216892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.469 [2024-07-15 16:21:06.216901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.469 qpair failed and we were unable to recover it. 00:29:30.469 16:21:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:30.469 16:21:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:30.469 [2024-07-15 16:21:06.217249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.469 [2024-07-15 16:21:06.217260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.469 qpair failed and we were unable to recover it. 00:29:30.469 [2024-07-15 16:21:06.217684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.469 [2024-07-15 16:21:06.217692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.469 qpair failed and we were unable to recover it. 00:29:30.469 [2024-07-15 16:21:06.217895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.469 [2024-07-15 16:21:06.217905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.469 qpair failed and we were unable to recover it. 
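At this point the test relaunches the target inside the cvl_0_0_ns_spdk namespace (nvmf_tgt -i 0 -e 0xFFFF -m 0xF0, new PID 2477897) and waits for it to come up and listen on the RPC socket /var/tmp/spdk.sock before issuing further RPCs. Conceptually that wait amounts to retrying a connect() against the UNIX-domain socket until it succeeds; the C sketch below illustrates the idea under that assumption. The helper name wait_for_listener, the retry count, and the poll interval are invented for illustration and are not the SPDK test helper itself.

/* Illustrative sketch of a "wait for listener" loop like the one the test
 * performs against /var/tmp/spdk.sock; this is not the SPDK helper itself. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

/* Return 0 once a connect() to the UNIX-domain socket succeeds,
 * -1 if it never comes up within max_retries attempts. */
static int wait_for_listener(const char *path, int max_retries)
{
    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        struct sockaddr_un addr = { 0 };
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;            /* the RPC socket is accepting connections */
        }

        close(fd);
        usleep(100 * 1000);      /* not up yet (ENOENT/ECONNREFUSED); retry */
    }
    return -1;
}

int main(void)
{
    if (wait_for_listener("/var/tmp/spdk.sock", 100) == 0)
        printf("listener is up\n");
    else
        printf("timed out waiting for listener\n");
    return 0;
}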
00:29:30.469 [2024-07-15 16:21:06.218303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.469 [2024-07-15 16:21:06.218312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.469 qpair failed and we were unable to recover it. 00:29:30.469 [2024-07-15 16:21:06.218706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.469 [2024-07-15 16:21:06.218715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.469 qpair failed and we were unable to recover it. 00:29:30.469 [2024-07-15 16:21:06.219144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.469 [2024-07-15 16:21:06.219153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.469 qpair failed and we were unable to recover it. 00:29:30.469 [2024-07-15 16:21:06.219523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.469 [2024-07-15 16:21:06.219531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.469 qpair failed and we were unable to recover it. 00:29:30.469 [2024-07-15 16:21:06.219808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.469 [2024-07-15 16:21:06.219817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.469 qpair failed and we were unable to recover it. 00:29:30.470 [2024-07-15 16:21:06.220236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.470 [2024-07-15 16:21:06.220245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.470 qpair failed and we were unable to recover it. 00:29:30.470 [2024-07-15 16:21:06.220410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.470 [2024-07-15 16:21:06.220419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.470 qpair failed and we were unable to recover it. 00:29:30.470 [2024-07-15 16:21:06.220805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.470 [2024-07-15 16:21:06.220813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.470 qpair failed and we were unable to recover it. 00:29:30.470 [2024-07-15 16:21:06.221288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.470 [2024-07-15 16:21:06.221297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.470 qpair failed and we were unable to recover it. 00:29:30.470 [2024-07-15 16:21:06.221573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.470 [2024-07-15 16:21:06.221582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.470 qpair failed and we were unable to recover it. 
00:29:30.470 [2024-07-15 16:21:06.221997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.470 [2024-07-15 16:21:06.222006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.470 qpair failed and we were unable to recover it. 00:29:30.470 [2024-07-15 16:21:06.222405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.470 [2024-07-15 16:21:06.222415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.470 qpair failed and we were unable to recover it. 00:29:30.470 [2024-07-15 16:21:06.222800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.470 [2024-07-15 16:21:06.222809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.470 qpair failed and we were unable to recover it. 00:29:30.470 [2024-07-15 16:21:06.223206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.470 [2024-07-15 16:21:06.223215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.470 qpair failed and we were unable to recover it. 00:29:30.470 [2024-07-15 16:21:06.223611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.470 [2024-07-15 16:21:06.223620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.470 qpair failed and we were unable to recover it. 00:29:30.470 [2024-07-15 16:21:06.224007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.470 [2024-07-15 16:21:06.224016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.470 qpair failed and we were unable to recover it. 00:29:30.470 [2024-07-15 16:21:06.224288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.470 [2024-07-15 16:21:06.224297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.470 qpair failed and we were unable to recover it. 00:29:30.470 [2024-07-15 16:21:06.224693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.470 [2024-07-15 16:21:06.224702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.470 qpair failed and we were unable to recover it. 00:29:30.470 [2024-07-15 16:21:06.224988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.470 [2024-07-15 16:21:06.224998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.470 qpair failed and we were unable to recover it. 00:29:30.470 [2024-07-15 16:21:06.225302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.470 [2024-07-15 16:21:06.225311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.470 qpair failed and we were unable to recover it. 
00:29:30.470 [2024-07-15 16:21:06.225694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.470 [2024-07-15 16:21:06.225702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.470 qpair failed and we were unable to recover it. 00:29:30.470 [2024-07-15 16:21:06.226092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.470 [2024-07-15 16:21:06.226101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.470 qpair failed and we were unable to recover it. 00:29:30.470 [2024-07-15 16:21:06.226297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.470 [2024-07-15 16:21:06.226307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.470 qpair failed and we were unable to recover it. 00:29:30.470 [2024-07-15 16:21:06.226691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.470 [2024-07-15 16:21:06.226700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.470 qpair failed and we were unable to recover it. 00:29:30.470 [2024-07-15 16:21:06.226971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.470 [2024-07-15 16:21:06.226979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.470 qpair failed and we were unable to recover it. 00:29:30.470 [2024-07-15 16:21:06.227367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.470 [2024-07-15 16:21:06.227378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.470 qpair failed and we were unable to recover it. 00:29:30.470 [2024-07-15 16:21:06.227799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.470 [2024-07-15 16:21:06.227807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.470 qpair failed and we were unable to recover it. 00:29:30.470 [2024-07-15 16:21:06.228185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.470 [2024-07-15 16:21:06.228194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.470 qpair failed and we were unable to recover it. 00:29:30.470 [2024-07-15 16:21:06.228684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.470 [2024-07-15 16:21:06.228693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.470 qpair failed and we were unable to recover it. 00:29:30.470 [2024-07-15 16:21:06.229081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.470 [2024-07-15 16:21:06.229090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.470 qpair failed and we were unable to recover it. 
00:29:30.470 [2024-07-15 16:21:06.229478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.470 [2024-07-15 16:21:06.229487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.470 qpair failed and we were unable to recover it. 00:29:30.470 [2024-07-15 16:21:06.229868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.470 [2024-07-15 16:21:06.229877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.470 qpair failed and we were unable to recover it. 00:29:30.470 [2024-07-15 16:21:06.230259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.470 [2024-07-15 16:21:06.230268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.470 qpair failed and we were unable to recover it. 00:29:30.470 [2024-07-15 16:21:06.230659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.470 [2024-07-15 16:21:06.230668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.470 qpair failed and we were unable to recover it. 00:29:30.470 [2024-07-15 16:21:06.231058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.470 [2024-07-15 16:21:06.231067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.470 qpair failed and we were unable to recover it. 00:29:30.470 [2024-07-15 16:21:06.231463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.470 [2024-07-15 16:21:06.231471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.470 qpair failed and we were unable to recover it. 00:29:30.470 [2024-07-15 16:21:06.231871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.470 [2024-07-15 16:21:06.231879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.470 qpair failed and we were unable to recover it. 00:29:30.470 [2024-07-15 16:21:06.232138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.470 [2024-07-15 16:21:06.232146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.470 qpair failed and we were unable to recover it. 00:29:30.470 [2024-07-15 16:21:06.232540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.470 [2024-07-15 16:21:06.232548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.470 qpair failed and we were unable to recover it. 00:29:30.470 [2024-07-15 16:21:06.232964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.470 [2024-07-15 16:21:06.232972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.470 qpair failed and we were unable to recover it. 
00:29:30.470 [2024-07-15 16:21:06.233382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.470 [2024-07-15 16:21:06.233390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.470 qpair failed and we were unable to recover it. 00:29:30.470 [2024-07-15 16:21:06.233762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.470 [2024-07-15 16:21:06.233771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.470 qpair failed and we were unable to recover it. 00:29:30.470 [2024-07-15 16:21:06.233929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.470 [2024-07-15 16:21:06.233937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.470 qpair failed and we were unable to recover it. 00:29:30.470 [2024-07-15 16:21:06.234351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.470 [2024-07-15 16:21:06.234360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.470 qpair failed and we were unable to recover it. 00:29:30.470 [2024-07-15 16:21:06.234738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.470 [2024-07-15 16:21:06.234746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.470 qpair failed and we were unable to recover it. 00:29:30.470 [2024-07-15 16:21:06.235134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.471 [2024-07-15 16:21:06.235144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.471 qpair failed and we were unable to recover it. 00:29:30.471 [2024-07-15 16:21:06.235506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.471 [2024-07-15 16:21:06.235515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.471 qpair failed and we were unable to recover it. 00:29:30.471 [2024-07-15 16:21:06.235902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.471 [2024-07-15 16:21:06.235911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.471 qpair failed and we were unable to recover it. 00:29:30.471 [2024-07-15 16:21:06.236407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.471 [2024-07-15 16:21:06.236437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.471 qpair failed and we were unable to recover it. 00:29:30.471 [2024-07-15 16:21:06.236833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.471 [2024-07-15 16:21:06.236843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.471 qpair failed and we were unable to recover it. 
00:29:30.471 [2024-07-15 16:21:06.237366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.471 [2024-07-15 16:21:06.237394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.471 qpair failed and we were unable to recover it. 00:29:30.471 [2024-07-15 16:21:06.237810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.471 [2024-07-15 16:21:06.237820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.471 qpair failed and we were unable to recover it. 00:29:30.471 [2024-07-15 16:21:06.238018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.471 [2024-07-15 16:21:06.238026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.471 qpair failed and we were unable to recover it. 00:29:30.471 [2024-07-15 16:21:06.238305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.471 [2024-07-15 16:21:06.238315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.471 qpair failed and we were unable to recover it. 00:29:30.471 [2024-07-15 16:21:06.238691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.471 [2024-07-15 16:21:06.238700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.471 qpair failed and we were unable to recover it. 00:29:30.471 [2024-07-15 16:21:06.239091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.471 [2024-07-15 16:21:06.239099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.471 qpair failed and we were unable to recover it. 00:29:30.471 [2024-07-15 16:21:06.239483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.471 [2024-07-15 16:21:06.239493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.471 qpair failed and we were unable to recover it. 00:29:30.471 [2024-07-15 16:21:06.239884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.471 [2024-07-15 16:21:06.239894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.471 qpair failed and we were unable to recover it. 00:29:30.471 [2024-07-15 16:21:06.240153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.471 [2024-07-15 16:21:06.240162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.471 qpair failed and we were unable to recover it. 00:29:30.471 [2024-07-15 16:21:06.240566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.471 [2024-07-15 16:21:06.240574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.471 qpair failed and we were unable to recover it. 
00:29:30.471 [2024-07-15 16:21:06.240960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.471 [2024-07-15 16:21:06.240968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.471 qpair failed and we were unable to recover it. 00:29:30.471 [2024-07-15 16:21:06.241375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.471 [2024-07-15 16:21:06.241383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.471 qpair failed and we were unable to recover it. 00:29:30.471 [2024-07-15 16:21:06.241805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.471 [2024-07-15 16:21:06.241814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.471 qpair failed and we were unable to recover it. 00:29:30.471 [2024-07-15 16:21:06.242117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.471 [2024-07-15 16:21:06.242130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.471 qpair failed and we were unable to recover it. 00:29:30.471 [2024-07-15 16:21:06.242530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.471 [2024-07-15 16:21:06.242538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.471 qpair failed and we were unable to recover it. 00:29:30.471 [2024-07-15 16:21:06.242918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.471 [2024-07-15 16:21:06.242929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.471 qpair failed and we were unable to recover it. 00:29:30.471 [2024-07-15 16:21:06.243441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.471 [2024-07-15 16:21:06.243470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.471 qpair failed and we were unable to recover it. 00:29:30.471 [2024-07-15 16:21:06.243891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.471 [2024-07-15 16:21:06.243902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.471 qpair failed and we were unable to recover it. 00:29:30.471 [2024-07-15 16:21:06.244351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.471 [2024-07-15 16:21:06.244381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.471 qpair failed and we were unable to recover it. 00:29:30.471 [2024-07-15 16:21:06.244648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.471 [2024-07-15 16:21:06.244658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.471 qpair failed and we were unable to recover it. 
00:29:30.471 [2024-07-15 16:21:06.244935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.471 [2024-07-15 16:21:06.244944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.471 qpair failed and we were unable to recover it. 00:29:30.471 [2024-07-15 16:21:06.245332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.471 [2024-07-15 16:21:06.245341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.471 qpair failed and we were unable to recover it. 00:29:30.471 [2024-07-15 16:21:06.245983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.471 [2024-07-15 16:21:06.245998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.471 qpair failed and we were unable to recover it. 00:29:30.471 [2024-07-15 16:21:06.246481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.471 [2024-07-15 16:21:06.246491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.471 qpair failed and we were unable to recover it. 00:29:30.471 [2024-07-15 16:21:06.246909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.471 [2024-07-15 16:21:06.246918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.471 qpair failed and we were unable to recover it. 00:29:30.471 [2024-07-15 16:21:06.247131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.471 [2024-07-15 16:21:06.247142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.471 qpair failed and we were unable to recover it. 00:29:30.471 [2024-07-15 16:21:06.247638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.471 [2024-07-15 16:21:06.247646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.471 qpair failed and we were unable to recover it. 00:29:30.471 [2024-07-15 16:21:06.247899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.471 [2024-07-15 16:21:06.247907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.471 qpair failed and we were unable to recover it. 00:29:30.471 [2024-07-15 16:21:06.248278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.471 [2024-07-15 16:21:06.248287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.471 qpair failed and we were unable to recover it. 00:29:30.471 [2024-07-15 16:21:06.248739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.472 [2024-07-15 16:21:06.248748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.472 qpair failed and we were unable to recover it. 
00:29:30.472 [2024-07-15 16:21:06.249186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.472 [2024-07-15 16:21:06.249195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.472 qpair failed and we were unable to recover it. 00:29:30.472 [2024-07-15 16:21:06.249624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.472 [2024-07-15 16:21:06.249633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.472 qpair failed and we were unable to recover it. 00:29:30.472 [2024-07-15 16:21:06.250040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.472 [2024-07-15 16:21:06.250049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.472 qpair failed and we were unable to recover it. 00:29:30.472 [2024-07-15 16:21:06.250445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.472 [2024-07-15 16:21:06.250453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.472 qpair failed and we were unable to recover it. 00:29:30.472 [2024-07-15 16:21:06.250841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.472 [2024-07-15 16:21:06.250849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.472 qpair failed and we were unable to recover it. 00:29:30.472 [2024-07-15 16:21:06.251339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.472 [2024-07-15 16:21:06.251348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.472 qpair failed and we were unable to recover it. 00:29:30.472 [2024-07-15 16:21:06.251645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.472 [2024-07-15 16:21:06.251653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.472 qpair failed and we were unable to recover it. 00:29:30.472 [2024-07-15 16:21:06.252040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.472 [2024-07-15 16:21:06.252048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.472 qpair failed and we were unable to recover it. 00:29:30.472 [2024-07-15 16:21:06.252148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.472 [2024-07-15 16:21:06.252157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.472 qpair failed and we were unable to recover it. 00:29:30.472 [2024-07-15 16:21:06.252492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.472 [2024-07-15 16:21:06.252502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.472 qpair failed and we were unable to recover it. 
00:29:30.472 [2024-07-15 16:21:06.252711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.472 [2024-07-15 16:21:06.252718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.472 qpair failed and we were unable to recover it. 00:29:30.472 [2024-07-15 16:21:06.253075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.472 [2024-07-15 16:21:06.253083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.472 qpair failed and we were unable to recover it. 00:29:30.472 [2024-07-15 16:21:06.253469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.472 [2024-07-15 16:21:06.253477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.472 qpair failed and we were unable to recover it. 00:29:30.472 [2024-07-15 16:21:06.253889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.472 [2024-07-15 16:21:06.253897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.472 qpair failed and we were unable to recover it. 00:29:30.472 [2024-07-15 16:21:06.254279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.472 [2024-07-15 16:21:06.254287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.472 qpair failed and we were unable to recover it. 00:29:30.472 [2024-07-15 16:21:06.254688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.472 [2024-07-15 16:21:06.254697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.472 qpair failed and we were unable to recover it. 00:29:30.472 [2024-07-15 16:21:06.255081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.472 [2024-07-15 16:21:06.255089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.472 qpair failed and we were unable to recover it. 00:29:30.472 [2024-07-15 16:21:06.255505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.472 [2024-07-15 16:21:06.255514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.472 qpair failed and we were unable to recover it. 00:29:30.472 [2024-07-15 16:21:06.255785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.472 [2024-07-15 16:21:06.255793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.472 qpair failed and we were unable to recover it. 00:29:30.472 [2024-07-15 16:21:06.255938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.472 [2024-07-15 16:21:06.255946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.472 qpair failed and we were unable to recover it. 
00:29:30.472 [2024-07-15 16:21:06.256362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.472 [2024-07-15 16:21:06.256371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.472 qpair failed and we were unable to recover it. 00:29:30.472 [2024-07-15 16:21:06.256770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.472 [2024-07-15 16:21:06.256778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.472 qpair failed and we were unable to recover it. 00:29:30.472 [2024-07-15 16:21:06.257175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.472 [2024-07-15 16:21:06.257184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.472 qpair failed and we were unable to recover it. 00:29:30.472 [2024-07-15 16:21:06.257601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.472 [2024-07-15 16:21:06.257609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.472 qpair failed and we were unable to recover it. 00:29:30.472 [2024-07-15 16:21:06.258007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.472 [2024-07-15 16:21:06.258016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.472 qpair failed and we were unable to recover it. 00:29:30.472 [2024-07-15 16:21:06.258340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.472 [2024-07-15 16:21:06.258354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.472 qpair failed and we were unable to recover it. 00:29:30.472 [2024-07-15 16:21:06.258711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.472 [2024-07-15 16:21:06.258720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.472 qpair failed and we were unable to recover it. 00:29:30.472 [2024-07-15 16:21:06.258986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.472 [2024-07-15 16:21:06.258995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.472 qpair failed and we were unable to recover it. 00:29:30.472 [2024-07-15 16:21:06.259513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.472 [2024-07-15 16:21:06.259522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.472 qpair failed and we were unable to recover it. 00:29:30.472 [2024-07-15 16:21:06.259917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.472 [2024-07-15 16:21:06.259925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.472 qpair failed and we were unable to recover it. 
00:29:30.472 [2024-07-15 16:21:06.260360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.472 [2024-07-15 16:21:06.260389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.472 qpair failed and we were unable to recover it. 00:29:30.472 [2024-07-15 16:21:06.260632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.472 [2024-07-15 16:21:06.260642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.472 qpair failed and we were unable to recover it. 00:29:30.472 [2024-07-15 16:21:06.261041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.472 [2024-07-15 16:21:06.261049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.472 qpair failed and we were unable to recover it. 00:29:30.472 [2024-07-15 16:21:06.261289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.472 [2024-07-15 16:21:06.261298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.472 qpair failed and we were unable to recover it. 00:29:30.472 [2024-07-15 16:21:06.261694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.472 [2024-07-15 16:21:06.261702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.472 qpair failed and we were unable to recover it. 00:29:30.472 [2024-07-15 16:21:06.262040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.472 [2024-07-15 16:21:06.262049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.472 qpair failed and we were unable to recover it. 00:29:30.472 [2024-07-15 16:21:06.262512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.472 [2024-07-15 16:21:06.262520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.472 qpair failed and we were unable to recover it. 00:29:30.472 [2024-07-15 16:21:06.262926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.472 [2024-07-15 16:21:06.262934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.472 qpair failed and we were unable to recover it. 00:29:30.472 [2024-07-15 16:21:06.263355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.472 [2024-07-15 16:21:06.263364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.473 qpair failed and we were unable to recover it. 00:29:30.473 [2024-07-15 16:21:06.263804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.473 [2024-07-15 16:21:06.263813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.473 qpair failed and we were unable to recover it. 
00:29:30.473 [2024-07-15 16:21:06.264210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.473 [2024-07-15 16:21:06.264219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.473 qpair failed and we were unable to recover it. 00:29:30.473 [2024-07-15 16:21:06.264534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.473 [2024-07-15 16:21:06.264542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.473 qpair failed and we were unable to recover it. 00:29:30.473 [2024-07-15 16:21:06.264960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.473 [2024-07-15 16:21:06.264968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.473 qpair failed and we were unable to recover it. 00:29:30.473 [2024-07-15 16:21:06.265253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.473 [2024-07-15 16:21:06.265260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.473 qpair failed and we were unable to recover it. 00:29:30.473 [2024-07-15 16:21:06.265668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.473 [2024-07-15 16:21:06.265676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.473 qpair failed and we were unable to recover it. 00:29:30.473 [2024-07-15 16:21:06.266002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.473 [2024-07-15 16:21:06.266010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.473 qpair failed and we were unable to recover it. 00:29:30.473 [2024-07-15 16:21:06.266423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.473 [2024-07-15 16:21:06.266432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.473 qpair failed and we were unable to recover it. 00:29:30.473 [2024-07-15 16:21:06.266856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.473 [2024-07-15 16:21:06.266864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.473 qpair failed and we were unable to recover it. 00:29:30.473 [2024-07-15 16:21:06.267256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.473 [2024-07-15 16:21:06.267264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.473 qpair failed and we were unable to recover it. 00:29:30.473 [2024-07-15 16:21:06.267517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.473 [2024-07-15 16:21:06.267525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.473 qpair failed and we were unable to recover it. 
00:29:30.473 [2024-07-15 16:21:06.267942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.473 [2024-07-15 16:21:06.267950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420
00:29:30.473 qpair failed and we were unable to recover it.
00:29:30.473 [2024-07-15 16:21:06.268261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.473 [2024-07-15 16:21:06.268269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420
00:29:30.473 qpair failed and we were unable to recover it.
00:29:30.473 [2024-07-15 16:21:06.268709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.473 [2024-07-15 16:21:06.268717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420
00:29:30.473 qpair failed and we were unable to recover it.
00:29:30.473 [2024-07-15 16:21:06.268757] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization...
00:29:30.473 [2024-07-15 16:21:06.268802] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:30.473 [2024-07-15 16:21:06.268934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.473 [2024-07-15 16:21:06.268942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420
00:29:30.473 qpair failed and we were unable to recover it.
00:29:30.473 [2024-07-15 16:21:06.269365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.473 [2024-07-15 16:21:06.269374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420
00:29:30.473 qpair failed and we were unable to recover it.
00:29:30.473 [2024-07-15 16:21:06.269567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.473 [2024-07-15 16:21:06.269577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420
00:29:30.473 qpair failed and we were unable to recover it.
00:29:30.473 [2024-07-15 16:21:06.269945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.473 [2024-07-15 16:21:06.269952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420
00:29:30.473 qpair failed and we were unable to recover it.
00:29:30.473 [2024-07-15 16:21:06.270371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.473 [2024-07-15 16:21:06.270380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420
00:29:30.473 qpair failed and we were unable to recover it.
00:29:30.473 [2024-07-15 16:21:06.270732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.473 [2024-07-15 16:21:06.270740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420
00:29:30.473 qpair failed and we were unable to recover it.
00:29:30.473 [2024-07-15 16:21:06.271166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.473 [2024-07-15 16:21:06.271175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.473 qpair failed and we were unable to recover it. 00:29:30.473 [2024-07-15 16:21:06.271580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.473 [2024-07-15 16:21:06.271589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.473 qpair failed and we were unable to recover it. 00:29:30.473 [2024-07-15 16:21:06.272061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.473 [2024-07-15 16:21:06.272070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.473 qpair failed and we were unable to recover it. 00:29:30.473 [2024-07-15 16:21:06.272391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.473 [2024-07-15 16:21:06.272400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.473 qpair failed and we were unable to recover it. 00:29:30.473 [2024-07-15 16:21:06.272787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.473 [2024-07-15 16:21:06.272795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.473 qpair failed and we were unable to recover it. 00:29:30.473 [2024-07-15 16:21:06.273120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.473 [2024-07-15 16:21:06.273135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.473 qpair failed and we were unable to recover it. 00:29:30.473 [2024-07-15 16:21:06.273527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.473 [2024-07-15 16:21:06.273536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.473 qpair failed and we were unable to recover it. 00:29:30.473 [2024-07-15 16:21:06.273875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.473 [2024-07-15 16:21:06.273884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.473 qpair failed and we were unable to recover it. 00:29:30.473 [2024-07-15 16:21:06.274310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.473 [2024-07-15 16:21:06.274319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.473 qpair failed and we were unable to recover it. 00:29:30.473 [2024-07-15 16:21:06.274537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.473 [2024-07-15 16:21:06.274546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.473 qpair failed and we were unable to recover it. 
00:29:30.473 [2024-07-15 16:21:06.274920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.473 [2024-07-15 16:21:06.274928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.473 qpair failed and we were unable to recover it. 00:29:30.473 [2024-07-15 16:21:06.275341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.473 [2024-07-15 16:21:06.275349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.473 qpair failed and we were unable to recover it. 00:29:30.473 [2024-07-15 16:21:06.275769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.473 [2024-07-15 16:21:06.275778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.473 qpair failed and we were unable to recover it. 00:29:30.473 [2024-07-15 16:21:06.276037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.473 [2024-07-15 16:21:06.276046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.473 qpair failed and we were unable to recover it. 00:29:30.473 [2024-07-15 16:21:06.276508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.473 [2024-07-15 16:21:06.276517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.473 qpair failed and we were unable to recover it. 00:29:30.473 [2024-07-15 16:21:06.276914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.473 [2024-07-15 16:21:06.276923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.473 qpair failed and we were unable to recover it. 00:29:30.473 [2024-07-15 16:21:06.277287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.473 [2024-07-15 16:21:06.277295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.473 qpair failed and we were unable to recover it. 00:29:30.473 [2024-07-15 16:21:06.277653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.473 [2024-07-15 16:21:06.277662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.473 qpair failed and we were unable to recover it. 00:29:30.473 [2024-07-15 16:21:06.278063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 16:21:06.278071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 16:21:06.278490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 16:21:06.278499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 
00:29:30.474 [2024-07-15 16:21:06.278936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 16:21:06.278945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 16:21:06.279341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 16:21:06.279371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 16:21:06.279762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 16:21:06.279773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 16:21:06.280165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 16:21:06.280175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 16:21:06.280673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 16:21:06.280681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 16:21:06.280915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 16:21:06.280924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 16:21:06.281334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 16:21:06.281342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 16:21:06.281744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 16:21:06.281752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 16:21:06.282186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 16:21:06.282194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 16:21:06.282591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 16:21:06.282599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 
00:29:30.474 [2024-07-15 16:21:06.282805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 16:21:06.282815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 16:21:06.283040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 16:21:06.283047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 16:21:06.283454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 16:21:06.283462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 16:21:06.283849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 16:21:06.283857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 16:21:06.284285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 16:21:06.284293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 16:21:06.284693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 16:21:06.284701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 16:21:06.285088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 16:21:06.285096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 16:21:06.285169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 16:21:06.285178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 16:21:06.285682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 16:21:06.285689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 16:21:06.286076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 16:21:06.286084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 
00:29:30.474 [2024-07-15 16:21:06.286206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 16:21:06.286214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 16:21:06.286599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 16:21:06.286607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 16:21:06.286996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 16:21:06.287004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.744 [2024-07-15 16:21:06.287414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.744 [2024-07-15 16:21:06.287423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.744 qpair failed and we were unable to recover it. 00:29:30.744 [2024-07-15 16:21:06.287820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.744 [2024-07-15 16:21:06.287829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.744 qpair failed and we were unable to recover it. 00:29:30.744 [2024-07-15 16:21:06.288240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.744 [2024-07-15 16:21:06.288251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.744 qpair failed and we were unable to recover it. 00:29:30.744 [2024-07-15 16:21:06.288651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.744 [2024-07-15 16:21:06.288659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.744 qpair failed and we were unable to recover it. 00:29:30.744 [2024-07-15 16:21:06.289082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.744 [2024-07-15 16:21:06.289090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.744 qpair failed and we were unable to recover it. 00:29:30.744 [2024-07-15 16:21:06.289491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.744 [2024-07-15 16:21:06.289500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.744 qpair failed and we were unable to recover it. 00:29:30.744 [2024-07-15 16:21:06.289923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.744 [2024-07-15 16:21:06.289932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.744 qpair failed and we were unable to recover it. 
00:29:30.744 [2024-07-15 16:21:06.290430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.745 [2024-07-15 16:21:06.290459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.745 qpair failed and we were unable to recover it. 00:29:30.745 [2024-07-15 16:21:06.290856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.745 [2024-07-15 16:21:06.290866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.745 qpair failed and we were unable to recover it. 00:29:30.745 [2024-07-15 16:21:06.291429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.745 [2024-07-15 16:21:06.291458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.745 qpair failed and we were unable to recover it. 00:29:30.745 [2024-07-15 16:21:06.291881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.745 [2024-07-15 16:21:06.291890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.745 qpair failed and we were unable to recover it. 00:29:30.745 [2024-07-15 16:21:06.291978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.745 [2024-07-15 16:21:06.291985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.745 qpair failed and we were unable to recover it. 00:29:30.745 [2024-07-15 16:21:06.292348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.745 [2024-07-15 16:21:06.292357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.745 qpair failed and we were unable to recover it. 00:29:30.745 [2024-07-15 16:21:06.292620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.745 [2024-07-15 16:21:06.292627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.745 qpair failed and we were unable to recover it. 00:29:30.745 [2024-07-15 16:21:06.293040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.745 [2024-07-15 16:21:06.293048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.745 qpair failed and we were unable to recover it. 00:29:30.745 [2024-07-15 16:21:06.293423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.745 [2024-07-15 16:21:06.293431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.745 qpair failed and we were unable to recover it. 00:29:30.745 [2024-07-15 16:21:06.293878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.745 [2024-07-15 16:21:06.293887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.745 qpair failed and we were unable to recover it. 
00:29:30.745 [2024-07-15 16:21:06.294285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.745 [2024-07-15 16:21:06.294294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.745 qpair failed and we were unable to recover it. 00:29:30.745 [2024-07-15 16:21:06.294689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.745 [2024-07-15 16:21:06.294697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.745 qpair failed and we were unable to recover it. 00:29:30.745 [2024-07-15 16:21:06.295094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.745 [2024-07-15 16:21:06.295102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.745 qpair failed and we were unable to recover it. 00:29:30.745 [2024-07-15 16:21:06.295499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.745 [2024-07-15 16:21:06.295508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.745 qpair failed and we were unable to recover it. 00:29:30.745 [2024-07-15 16:21:06.295760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.745 [2024-07-15 16:21:06.295767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.745 qpair failed and we were unable to recover it. 00:29:30.745 [2024-07-15 16:21:06.295972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.745 [2024-07-15 16:21:06.295980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.745 qpair failed and we were unable to recover it. 00:29:30.745 [2024-07-15 16:21:06.296157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.745 [2024-07-15 16:21:06.296165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.745 qpair failed and we were unable to recover it. 00:29:30.745 [2024-07-15 16:21:06.296578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.745 [2024-07-15 16:21:06.296586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.745 qpair failed and we were unable to recover it. 00:29:30.745 [2024-07-15 16:21:06.296928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.745 [2024-07-15 16:21:06.296935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.745 qpair failed and we were unable to recover it. 00:29:30.745 [2024-07-15 16:21:06.297323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.745 [2024-07-15 16:21:06.297331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.745 qpair failed and we were unable to recover it. 
00:29:30.745 [2024-07-15 16:21:06.297708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.745 [2024-07-15 16:21:06.297717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420
00:29:30.745 qpair failed and we were unable to recover it.
00:29:30.745 [2024-07-15 16:21:06.297927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.745 [2024-07-15 16:21:06.297940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420
00:29:30.745 qpair failed and we were unable to recover it.
00:29:30.745 [2024-07-15 16:21:06.298147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.745 [2024-07-15 16:21:06.298156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420
00:29:30.745 qpair failed and we were unable to recover it.
00:29:30.745 [2024-07-15 16:21:06.298547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.745 [2024-07-15 16:21:06.298555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420
00:29:30.745 qpair failed and we were unable to recover it.
00:29:30.745 [2024-07-15 16:21:06.298976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.745 [2024-07-15 16:21:06.298984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420
00:29:30.745 qpair failed and we were unable to recover it.
00:29:30.745 [2024-07-15 16:21:06.299376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.745 [2024-07-15 16:21:06.299384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420
00:29:30.745 qpair failed and we were unable to recover it.
00:29:30.745 [2024-07-15 16:21:06.299772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.745 [2024-07-15 16:21:06.299779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420
00:29:30.745 qpair failed and we were unable to recover it.
00:29:30.745 [2024-07-15 16:21:06.300167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.745 [2024-07-15 16:21:06.300175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420
00:29:30.745 qpair failed and we were unable to recover it.
00:29:30.745 [2024-07-15 16:21:06.300343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.745 [2024-07-15 16:21:06.300352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420
00:29:30.745 qpair failed and we were unable to recover it.
00:29:30.745 EAL: No free 2048 kB hugepages reported on node 1
00:29:30.745 [2024-07-15 16:21:06.300800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.745 [2024-07-15 16:21:06.300808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420
00:29:30.745 qpair failed and we were unable to recover it.
00:29:30.745 [2024-07-15 16:21:06.301022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.745 [2024-07-15 16:21:06.301029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.745 qpair failed and we were unable to recover it. 00:29:30.745 [2024-07-15 16:21:06.301441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.745 [2024-07-15 16:21:06.301449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.745 qpair failed and we were unable to recover it. 00:29:30.745 [2024-07-15 16:21:06.301707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.745 [2024-07-15 16:21:06.301715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.745 qpair failed and we were unable to recover it. 00:29:30.745 [2024-07-15 16:21:06.302137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.745 [2024-07-15 16:21:06.302145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.745 qpair failed and we were unable to recover it. 00:29:30.745 [2024-07-15 16:21:06.302543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.745 [2024-07-15 16:21:06.302551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.745 qpair failed and we were unable to recover it. 00:29:30.745 [2024-07-15 16:21:06.302939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.745 [2024-07-15 16:21:06.302948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.745 qpair failed and we were unable to recover it. 00:29:30.745 [2024-07-15 16:21:06.303323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.745 [2024-07-15 16:21:06.303331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.745 qpair failed and we were unable to recover it. 00:29:30.745 [2024-07-15 16:21:06.303643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.745 [2024-07-15 16:21:06.303653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.745 qpair failed and we were unable to recover it. 00:29:30.745 [2024-07-15 16:21:06.304108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.745 [2024-07-15 16:21:06.304116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.746 qpair failed and we were unable to recover it. 00:29:30.746 [2024-07-15 16:21:06.304524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.746 [2024-07-15 16:21:06.304532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.746 qpair failed and we were unable to recover it. 
00:29:30.746 [2024-07-15 16:21:06.304954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.746 [2024-07-15 16:21:06.304961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.746 qpair failed and we were unable to recover it. 00:29:30.746 [2024-07-15 16:21:06.305352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.746 [2024-07-15 16:21:06.305360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.746 qpair failed and we were unable to recover it. 00:29:30.746 [2024-07-15 16:21:06.305749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.746 [2024-07-15 16:21:06.305757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.746 qpair failed and we were unable to recover it. 00:29:30.746 [2024-07-15 16:21:06.306144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.746 [2024-07-15 16:21:06.306153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.746 qpair failed and we were unable to recover it. 00:29:30.746 [2024-07-15 16:21:06.306500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.746 [2024-07-15 16:21:06.306508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.746 qpair failed and we were unable to recover it. 00:29:30.746 [2024-07-15 16:21:06.306900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.746 [2024-07-15 16:21:06.306908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.746 qpair failed and we were unable to recover it. 00:29:30.746 [2024-07-15 16:21:06.307299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.746 [2024-07-15 16:21:06.307308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.746 qpair failed and we were unable to recover it. 00:29:30.746 [2024-07-15 16:21:06.307697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.746 [2024-07-15 16:21:06.307705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.746 qpair failed and we were unable to recover it. 00:29:30.746 [2024-07-15 16:21:06.308126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.746 [2024-07-15 16:21:06.308137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.746 qpair failed and we were unable to recover it. 00:29:30.746 [2024-07-15 16:21:06.308491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.746 [2024-07-15 16:21:06.308499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.746 qpair failed and we were unable to recover it. 
00:29:30.746 [2024-07-15 16:21:06.308888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.746 [2024-07-15 16:21:06.308895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.746 qpair failed and we were unable to recover it. 00:29:30.746 [2024-07-15 16:21:06.309397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.746 [2024-07-15 16:21:06.309426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.746 qpair failed and we were unable to recover it. 00:29:30.746 [2024-07-15 16:21:06.309782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.746 [2024-07-15 16:21:06.309792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.746 qpair failed and we were unable to recover it. 00:29:30.746 [2024-07-15 16:21:06.310185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.746 [2024-07-15 16:21:06.310193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.746 qpair failed and we were unable to recover it. 00:29:30.746 [2024-07-15 16:21:06.310581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.746 [2024-07-15 16:21:06.310589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.746 qpair failed and we were unable to recover it. 00:29:30.746 [2024-07-15 16:21:06.310847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.746 [2024-07-15 16:21:06.310855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.746 qpair failed and we were unable to recover it. 00:29:30.746 [2024-07-15 16:21:06.311274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.746 [2024-07-15 16:21:06.311282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.746 qpair failed and we were unable to recover it. 00:29:30.746 [2024-07-15 16:21:06.311668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.746 [2024-07-15 16:21:06.311676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.746 qpair failed and we were unable to recover it. 00:29:30.746 [2024-07-15 16:21:06.311935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.746 [2024-07-15 16:21:06.311944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.746 qpair failed and we were unable to recover it. 00:29:30.746 [2024-07-15 16:21:06.312332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.746 [2024-07-15 16:21:06.312340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.746 qpair failed and we were unable to recover it. 
00:29:30.746 [2024-07-15 16:21:06.312766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.746 [2024-07-15 16:21:06.312774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.746 qpair failed and we were unable to recover it. 00:29:30.746 [2024-07-15 16:21:06.313162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.746 [2024-07-15 16:21:06.313171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.746 qpair failed and we were unable to recover it. 00:29:30.746 [2024-07-15 16:21:06.313593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.746 [2024-07-15 16:21:06.313602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.746 qpair failed and we were unable to recover it. 00:29:30.746 [2024-07-15 16:21:06.313830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.746 [2024-07-15 16:21:06.313838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.746 qpair failed and we were unable to recover it. 00:29:30.746 [2024-07-15 16:21:06.314022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.746 [2024-07-15 16:21:06.314032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.746 qpair failed and we were unable to recover it. 00:29:30.746 [2024-07-15 16:21:06.314406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.746 [2024-07-15 16:21:06.314414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.746 qpair failed and we were unable to recover it. 00:29:30.746 [2024-07-15 16:21:06.314801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.746 [2024-07-15 16:21:06.314809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.746 qpair failed and we were unable to recover it. 00:29:30.746 [2024-07-15 16:21:06.315204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.746 [2024-07-15 16:21:06.315212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.746 qpair failed and we were unable to recover it. 00:29:30.746 [2024-07-15 16:21:06.315604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.746 [2024-07-15 16:21:06.315612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.746 qpair failed and we were unable to recover it. 00:29:30.746 [2024-07-15 16:21:06.315693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.746 [2024-07-15 16:21:06.315699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.746 qpair failed and we were unable to recover it. 
00:29:30.746 [2024-07-15 16:21:06.316002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.746 [2024-07-15 16:21:06.316011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.746 qpair failed and we were unable to recover it. 00:29:30.746 [2024-07-15 16:21:06.316268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.746 [2024-07-15 16:21:06.316276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.746 qpair failed and we were unable to recover it. 00:29:30.746 [2024-07-15 16:21:06.316524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.746 [2024-07-15 16:21:06.316532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.746 qpair failed and we were unable to recover it. 00:29:30.746 [2024-07-15 16:21:06.316962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.746 [2024-07-15 16:21:06.316970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.746 qpair failed and we were unable to recover it. 00:29:30.746 [2024-07-15 16:21:06.317367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.746 [2024-07-15 16:21:06.317376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.746 qpair failed and we were unable to recover it. 00:29:30.746 [2024-07-15 16:21:06.317799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.746 [2024-07-15 16:21:06.317810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.746 qpair failed and we were unable to recover it. 00:29:30.746 [2024-07-15 16:21:06.318210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.747 [2024-07-15 16:21:06.318218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.747 qpair failed and we were unable to recover it. 00:29:30.747 [2024-07-15 16:21:06.318608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.747 [2024-07-15 16:21:06.318615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.747 qpair failed and we were unable to recover it. 00:29:30.747 [2024-07-15 16:21:06.319009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.747 [2024-07-15 16:21:06.319016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.747 qpair failed and we were unable to recover it. 00:29:30.747 [2024-07-15 16:21:06.319425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.747 [2024-07-15 16:21:06.319434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.747 qpair failed and we were unable to recover it. 
00:29:30.747 [2024-07-15 16:21:06.319821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.747 [2024-07-15 16:21:06.319828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.747 qpair failed and we were unable to recover it. 00:29:30.747 [2024-07-15 16:21:06.320210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.747 [2024-07-15 16:21:06.320218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.747 qpair failed and we were unable to recover it. 00:29:30.747 [2024-07-15 16:21:06.320605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.747 [2024-07-15 16:21:06.320613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.747 qpair failed and we were unable to recover it. 00:29:30.747 [2024-07-15 16:21:06.321006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.747 [2024-07-15 16:21:06.321015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.747 qpair failed and we were unable to recover it. 00:29:30.747 [2024-07-15 16:21:06.321435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.747 [2024-07-15 16:21:06.321445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.747 qpair failed and we were unable to recover it. 00:29:30.747 [2024-07-15 16:21:06.321868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.747 [2024-07-15 16:21:06.321876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.747 qpair failed and we were unable to recover it. 00:29:30.747 [2024-07-15 16:21:06.322265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.747 [2024-07-15 16:21:06.322274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.747 qpair failed and we were unable to recover it. 00:29:30.747 [2024-07-15 16:21:06.322383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.747 [2024-07-15 16:21:06.322390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.747 qpair failed and we were unable to recover it. 00:29:30.747 [2024-07-15 16:21:06.322769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.747 [2024-07-15 16:21:06.322777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.747 qpair failed and we were unable to recover it. 00:29:30.747 [2024-07-15 16:21:06.323190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.747 [2024-07-15 16:21:06.323198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.747 qpair failed and we were unable to recover it. 
00:29:30.747 [2024-07-15 16:21:06.323585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.747 [2024-07-15 16:21:06.323593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.747 qpair failed and we were unable to recover it. 00:29:30.747 [2024-07-15 16:21:06.323985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.747 [2024-07-15 16:21:06.323993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.747 qpair failed and we were unable to recover it. 00:29:30.747 [2024-07-15 16:21:06.324336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.747 [2024-07-15 16:21:06.324344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.747 qpair failed and we were unable to recover it. 00:29:30.747 [2024-07-15 16:21:06.324757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.747 [2024-07-15 16:21:06.324765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.747 qpair failed and we were unable to recover it. 00:29:30.747 [2024-07-15 16:21:06.325064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.747 [2024-07-15 16:21:06.325072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.747 qpair failed and we were unable to recover it. 00:29:30.747 [2024-07-15 16:21:06.325466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.747 [2024-07-15 16:21:06.325475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.747 qpair failed and we were unable to recover it. 00:29:30.747 [2024-07-15 16:21:06.325870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.747 [2024-07-15 16:21:06.325879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.747 qpair failed and we were unable to recover it. 00:29:30.747 [2024-07-15 16:21:06.326262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.747 [2024-07-15 16:21:06.326271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.747 qpair failed and we were unable to recover it. 00:29:30.747 [2024-07-15 16:21:06.326473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.747 [2024-07-15 16:21:06.326483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.747 qpair failed and we were unable to recover it. 00:29:30.747 [2024-07-15 16:21:06.326891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.747 [2024-07-15 16:21:06.326899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.747 qpair failed and we were unable to recover it. 
00:29:30.747 [2024-07-15 16:21:06.327204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.747 [2024-07-15 16:21:06.327211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.747 qpair failed and we were unable to recover it. 00:29:30.747 [2024-07-15 16:21:06.327424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.747 [2024-07-15 16:21:06.327431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.747 qpair failed and we were unable to recover it. 00:29:30.747 [2024-07-15 16:21:06.327840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.747 [2024-07-15 16:21:06.327848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.747 qpair failed and we were unable to recover it. 00:29:30.747 [2024-07-15 16:21:06.328235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.747 [2024-07-15 16:21:06.328243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.747 qpair failed and we were unable to recover it. 00:29:30.747 [2024-07-15 16:21:06.328613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.747 [2024-07-15 16:21:06.328622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.747 qpair failed and we were unable to recover it. 00:29:30.747 [2024-07-15 16:21:06.329005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.747 [2024-07-15 16:21:06.329014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.747 qpair failed and we were unable to recover it. 00:29:30.747 [2024-07-15 16:21:06.329416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.747 [2024-07-15 16:21:06.329424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.747 qpair failed and we were unable to recover it. 00:29:30.747 [2024-07-15 16:21:06.329613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.747 [2024-07-15 16:21:06.329621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.747 qpair failed and we were unable to recover it. 00:29:30.747 [2024-07-15 16:21:06.330016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.747 [2024-07-15 16:21:06.330024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.747 qpair failed and we were unable to recover it. 00:29:30.747 [2024-07-15 16:21:06.330438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.747 [2024-07-15 16:21:06.330446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.747 qpair failed and we were unable to recover it. 
00:29:30.747 [2024-07-15 16:21:06.330830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.747 [2024-07-15 16:21:06.330839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.747 qpair failed and we were unable to recover it. 00:29:30.747 [2024-07-15 16:21:06.331226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.747 [2024-07-15 16:21:06.331234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.747 qpair failed and we were unable to recover it. 00:29:30.747 [2024-07-15 16:21:06.331600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.747 [2024-07-15 16:21:06.331608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.747 qpair failed and we were unable to recover it. 00:29:30.747 [2024-07-15 16:21:06.332019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.747 [2024-07-15 16:21:06.332027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.747 qpair failed and we were unable to recover it. 00:29:30.747 [2024-07-15 16:21:06.332505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.747 [2024-07-15 16:21:06.332513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.747 qpair failed and we were unable to recover it. 00:29:30.747 [2024-07-15 16:21:06.332909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.747 [2024-07-15 16:21:06.332921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.747 qpair failed and we were unable to recover it. 00:29:30.747 [2024-07-15 16:21:06.333318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.747 [2024-07-15 16:21:06.333328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.747 qpair failed and we were unable to recover it. 00:29:30.748 [2024-07-15 16:21:06.333741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.748 [2024-07-15 16:21:06.333749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.748 qpair failed and we were unable to recover it. 00:29:30.748 [2024-07-15 16:21:06.334145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.748 [2024-07-15 16:21:06.334153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.748 qpair failed and we were unable to recover it. 00:29:30.748 [2024-07-15 16:21:06.334611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.748 [2024-07-15 16:21:06.334619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.748 qpair failed and we were unable to recover it. 
00:29:30.748 [2024-07-15 16:21:06.335006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.748 [2024-07-15 16:21:06.335014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.748 qpair failed and we were unable to recover it. 00:29:30.748 [2024-07-15 16:21:06.335405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.748 [2024-07-15 16:21:06.335413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.748 qpair failed and we were unable to recover it. 00:29:30.748 [2024-07-15 16:21:06.335802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.748 [2024-07-15 16:21:06.335810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.748 qpair failed and we were unable to recover it. 00:29:30.748 [2024-07-15 16:21:06.336194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.748 [2024-07-15 16:21:06.336202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.748 qpair failed and we were unable to recover it. 00:29:30.748 [2024-07-15 16:21:06.336598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.748 [2024-07-15 16:21:06.336605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.748 qpair failed and we were unable to recover it. 00:29:30.748 [2024-07-15 16:21:06.337013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.748 [2024-07-15 16:21:06.337021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.748 qpair failed and we were unable to recover it. 00:29:30.748 [2024-07-15 16:21:06.337209] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:30.748 [2024-07-15 16:21:06.337434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.748 [2024-07-15 16:21:06.337443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.748 qpair failed and we were unable to recover it. 00:29:30.748 [2024-07-15 16:21:06.337728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.748 [2024-07-15 16:21:06.337736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.748 qpair failed and we were unable to recover it. 00:29:30.748 [2024-07-15 16:21:06.337996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.748 [2024-07-15 16:21:06.338006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.748 qpair failed and we were unable to recover it. 
00:29:30.748 [2024-07-15 16:21:06.338397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.748 [2024-07-15 16:21:06.338405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.748 qpair failed and we were unable to recover it. 00:29:30.748 [2024-07-15 16:21:06.338792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.748 [2024-07-15 16:21:06.338800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.748 qpair failed and we were unable to recover it. 00:29:30.748 [2024-07-15 16:21:06.339193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.748 [2024-07-15 16:21:06.339201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.748 qpair failed and we were unable to recover it. 00:29:30.748 [2024-07-15 16:21:06.339578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.748 [2024-07-15 16:21:06.339586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.748 qpair failed and we were unable to recover it. 00:29:30.748 [2024-07-15 16:21:06.340001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.748 [2024-07-15 16:21:06.340009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.748 qpair failed and we were unable to recover it. 00:29:30.748 [2024-07-15 16:21:06.340424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.748 [2024-07-15 16:21:06.340432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.748 qpair failed and we were unable to recover it. 00:29:30.748 [2024-07-15 16:21:06.340825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.748 [2024-07-15 16:21:06.340834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.748 qpair failed and we were unable to recover it. 00:29:30.748 [2024-07-15 16:21:06.341187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.748 [2024-07-15 16:21:06.341196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.748 qpair failed and we were unable to recover it. 00:29:30.748 [2024-07-15 16:21:06.341632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.748 [2024-07-15 16:21:06.341640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.748 qpair failed and we were unable to recover it. 00:29:30.748 [2024-07-15 16:21:06.342080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.748 [2024-07-15 16:21:06.342089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.748 qpair failed and we were unable to recover it. 
00:29:30.748 [2024-07-15 16:21:06.342390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.748 [2024-07-15 16:21:06.342398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.748 qpair failed and we were unable to recover it. 00:29:30.748 [2024-07-15 16:21:06.342790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.748 [2024-07-15 16:21:06.342798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.748 qpair failed and we were unable to recover it. 00:29:30.748 [2024-07-15 16:21:06.343210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.748 [2024-07-15 16:21:06.343218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.748 qpair failed and we were unable to recover it. 00:29:30.748 [2024-07-15 16:21:06.343616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.748 [2024-07-15 16:21:06.343625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.748 qpair failed and we were unable to recover it. 00:29:30.748 [2024-07-15 16:21:06.344021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.748 [2024-07-15 16:21:06.344030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.748 qpair failed and we were unable to recover it. 00:29:30.748 [2024-07-15 16:21:06.344437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.748 [2024-07-15 16:21:06.344447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.748 qpair failed and we were unable to recover it. 00:29:30.748 [2024-07-15 16:21:06.344860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.748 [2024-07-15 16:21:06.344869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.748 qpair failed and we were unable to recover it. 00:29:30.748 [2024-07-15 16:21:06.345198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.748 [2024-07-15 16:21:06.345207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.748 qpair failed and we were unable to recover it. 00:29:30.748 [2024-07-15 16:21:06.345613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.748 [2024-07-15 16:21:06.345622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.748 qpair failed and we were unable to recover it. 00:29:30.748 [2024-07-15 16:21:06.346021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.748 [2024-07-15 16:21:06.346029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.748 qpair failed and we were unable to recover it. 
00:29:30.748 [2024-07-15 16:21:06.346428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.748 [2024-07-15 16:21:06.346436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.748 qpair failed and we were unable to recover it. 00:29:30.748 [2024-07-15 16:21:06.346829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.748 [2024-07-15 16:21:06.346837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.748 qpair failed and we were unable to recover it. 00:29:30.748 [2024-07-15 16:21:06.347229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.748 [2024-07-15 16:21:06.347237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.748 qpair failed and we were unable to recover it. 00:29:30.748 [2024-07-15 16:21:06.347632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.748 [2024-07-15 16:21:06.347640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.748 qpair failed and we were unable to recover it. 00:29:30.748 [2024-07-15 16:21:06.348053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.749 [2024-07-15 16:21:06.348062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.749 qpair failed and we were unable to recover it. 00:29:30.749 [2024-07-15 16:21:06.348454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.749 [2024-07-15 16:21:06.348463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.749 qpair failed and we were unable to recover it. 00:29:30.749 [2024-07-15 16:21:06.348852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.749 [2024-07-15 16:21:06.348860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.749 qpair failed and we were unable to recover it. 00:29:30.749 [2024-07-15 16:21:06.349062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.749 [2024-07-15 16:21:06.349071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.749 qpair failed and we were unable to recover it. 00:29:30.749 [2024-07-15 16:21:06.349474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.749 [2024-07-15 16:21:06.349484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.749 qpair failed and we were unable to recover it. 00:29:30.749 [2024-07-15 16:21:06.349916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.749 [2024-07-15 16:21:06.349925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.749 qpair failed and we were unable to recover it. 
00:29:30.749 [2024-07-15 16:21:06.350238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.749 [2024-07-15 16:21:06.350246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.749 qpair failed and we were unable to recover it. 00:29:30.749 [2024-07-15 16:21:06.350532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.749 [2024-07-15 16:21:06.350541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.749 qpair failed and we were unable to recover it. 00:29:30.749 [2024-07-15 16:21:06.350958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.749 [2024-07-15 16:21:06.350966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.749 qpair failed and we were unable to recover it. 00:29:30.749 [2024-07-15 16:21:06.351359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.749 [2024-07-15 16:21:06.351367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.749 qpair failed and we were unable to recover it. 00:29:30.749 [2024-07-15 16:21:06.351570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.749 [2024-07-15 16:21:06.351578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.749 qpair failed and we were unable to recover it. 00:29:30.749 [2024-07-15 16:21:06.351966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.749 [2024-07-15 16:21:06.351975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.749 qpair failed and we were unable to recover it. 00:29:30.749 [2024-07-15 16:21:06.352280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.749 [2024-07-15 16:21:06.352288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.749 qpair failed and we were unable to recover it. 00:29:30.749 [2024-07-15 16:21:06.352687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.749 [2024-07-15 16:21:06.352695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.749 qpair failed and we were unable to recover it. 00:29:30.749 [2024-07-15 16:21:06.353087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.749 [2024-07-15 16:21:06.353095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.749 qpair failed and we were unable to recover it. 00:29:30.749 [2024-07-15 16:21:06.353391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.749 [2024-07-15 16:21:06.353402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.749 qpair failed and we were unable to recover it. 
00:29:30.749 [2024-07-15 16:21:06.353794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.749 [2024-07-15 16:21:06.353803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.749 qpair failed and we were unable to recover it. 00:29:30.749 [2024-07-15 16:21:06.354211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.749 [2024-07-15 16:21:06.354219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.749 qpair failed and we were unable to recover it. 00:29:30.749 [2024-07-15 16:21:06.354435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.749 [2024-07-15 16:21:06.354443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.749 qpair failed and we were unable to recover it. 00:29:30.749 [2024-07-15 16:21:06.354848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.749 [2024-07-15 16:21:06.354857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.749 qpair failed and we were unable to recover it. 00:29:30.749 [2024-07-15 16:21:06.355269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.749 [2024-07-15 16:21:06.355277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.749 qpair failed and we were unable to recover it. 00:29:30.749 [2024-07-15 16:21:06.355670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.749 [2024-07-15 16:21:06.355678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.749 qpair failed and we were unable to recover it. 00:29:30.749 [2024-07-15 16:21:06.356075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.749 [2024-07-15 16:21:06.356083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.749 qpair failed and we were unable to recover it. 00:29:30.749 [2024-07-15 16:21:06.356471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.749 [2024-07-15 16:21:06.356479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.749 qpair failed and we were unable to recover it. 00:29:30.749 [2024-07-15 16:21:06.356873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.749 [2024-07-15 16:21:06.356881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.749 qpair failed and we were unable to recover it. 00:29:30.749 [2024-07-15 16:21:06.357096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.749 [2024-07-15 16:21:06.357103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.749 qpair failed and we were unable to recover it. 
00:29:30.749 [2024-07-15 16:21:06.357498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.749 [2024-07-15 16:21:06.357507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.749 qpair failed and we were unable to recover it. 00:29:30.749 [2024-07-15 16:21:06.357902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.749 [2024-07-15 16:21:06.357911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.749 qpair failed and we were unable to recover it. 00:29:30.749 [2024-07-15 16:21:06.358420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.749 [2024-07-15 16:21:06.358449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.749 qpair failed and we were unable to recover it. 00:29:30.749 [2024-07-15 16:21:06.358856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.749 [2024-07-15 16:21:06.358867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.749 qpair failed and we were unable to recover it. 00:29:30.749 [2024-07-15 16:21:06.359379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.749 [2024-07-15 16:21:06.359408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.749 qpair failed and we were unable to recover it. 00:29:30.749 [2024-07-15 16:21:06.359757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.750 [2024-07-15 16:21:06.359767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.750 qpair failed and we were unable to recover it. 00:29:30.750 [2024-07-15 16:21:06.360193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.750 [2024-07-15 16:21:06.360201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.750 qpair failed and we were unable to recover it. 00:29:30.750 [2024-07-15 16:21:06.360546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.750 [2024-07-15 16:21:06.360556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.750 qpair failed and we were unable to recover it. 00:29:30.750 [2024-07-15 16:21:06.360948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.750 [2024-07-15 16:21:06.360956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.750 qpair failed and we were unable to recover it. 00:29:30.750 [2024-07-15 16:21:06.361351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.750 [2024-07-15 16:21:06.361360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.750 qpair failed and we were unable to recover it. 
00:29:30.750 [2024-07-15 16:21:06.361652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.750 [2024-07-15 16:21:06.361660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.750 qpair failed and we were unable to recover it. 00:29:30.750 [2024-07-15 16:21:06.362056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.750 [2024-07-15 16:21:06.362063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.750 qpair failed and we were unable to recover it. 00:29:30.750 [2024-07-15 16:21:06.362456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.750 [2024-07-15 16:21:06.362465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.750 qpair failed and we were unable to recover it. 00:29:30.750 [2024-07-15 16:21:06.362867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.750 [2024-07-15 16:21:06.362875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.750 qpair failed and we were unable to recover it. 00:29:30.750 [2024-07-15 16:21:06.363186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.750 [2024-07-15 16:21:06.363196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.750 qpair failed and we were unable to recover it. 00:29:30.750 [2024-07-15 16:21:06.363598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.750 [2024-07-15 16:21:06.363607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.750 qpair failed and we were unable to recover it. 00:29:30.750 [2024-07-15 16:21:06.364009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.750 [2024-07-15 16:21:06.364018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.750 qpair failed and we were unable to recover it. 00:29:30.750 [2024-07-15 16:21:06.364444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.750 [2024-07-15 16:21:06.364453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.750 qpair failed and we were unable to recover it. 00:29:30.750 [2024-07-15 16:21:06.364869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.750 [2024-07-15 16:21:06.364878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.750 qpair failed and we were unable to recover it. 00:29:30.750 [2024-07-15 16:21:06.365271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.750 [2024-07-15 16:21:06.365280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.750 qpair failed and we were unable to recover it. 
00:29:30.750 [2024-07-15 16:21:06.365666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.750 [2024-07-15 16:21:06.365675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.750 qpair failed and we were unable to recover it. 00:29:30.750 [2024-07-15 16:21:06.366065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.750 [2024-07-15 16:21:06.366073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.750 qpair failed and we were unable to recover it. 00:29:30.750 [2024-07-15 16:21:06.366455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.750 [2024-07-15 16:21:06.366463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.750 qpair failed and we were unable to recover it. 00:29:30.750 [2024-07-15 16:21:06.366849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.750 [2024-07-15 16:21:06.366857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.750 qpair failed and we were unable to recover it. 00:29:30.750 [2024-07-15 16:21:06.367241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.750 [2024-07-15 16:21:06.367249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.750 qpair failed and we were unable to recover it. 00:29:30.750 [2024-07-15 16:21:06.367632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.750 [2024-07-15 16:21:06.367641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.750 qpair failed and we were unable to recover it. 00:29:30.750 [2024-07-15 16:21:06.367943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.750 [2024-07-15 16:21:06.367952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.750 qpair failed and we were unable to recover it. 00:29:30.750 [2024-07-15 16:21:06.368355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.750 [2024-07-15 16:21:06.368363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.750 qpair failed and we were unable to recover it. 00:29:30.750 [2024-07-15 16:21:06.368758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.750 [2024-07-15 16:21:06.368766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.750 qpair failed and we were unable to recover it. 00:29:30.750 [2024-07-15 16:21:06.369186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.750 [2024-07-15 16:21:06.369196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.750 qpair failed and we were unable to recover it. 
00:29:30.750 [2024-07-15 16:21:06.369674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.750 [2024-07-15 16:21:06.369683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.750 qpair failed and we were unable to recover it. 00:29:30.750 [2024-07-15 16:21:06.370100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.750 [2024-07-15 16:21:06.370108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.750 qpair failed and we were unable to recover it. 00:29:30.750 [2024-07-15 16:21:06.370513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.750 [2024-07-15 16:21:06.370521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.750 qpair failed and we were unable to recover it. 00:29:30.750 [2024-07-15 16:21:06.370915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.750 [2024-07-15 16:21:06.370923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.750 qpair failed and we were unable to recover it. 00:29:30.750 [2024-07-15 16:21:06.371322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.750 [2024-07-15 16:21:06.371352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.750 qpair failed and we were unable to recover it. 00:29:30.750 [2024-07-15 16:21:06.371627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.750 [2024-07-15 16:21:06.371637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.750 qpair failed and we were unable to recover it. 00:29:30.750 [2024-07-15 16:21:06.372055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.750 [2024-07-15 16:21:06.372064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.750 qpair failed and we were unable to recover it. 00:29:30.750 [2024-07-15 16:21:06.372464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.750 [2024-07-15 16:21:06.372473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.750 qpair failed and we were unable to recover it. 00:29:30.750 [2024-07-15 16:21:06.372895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.750 [2024-07-15 16:21:06.372903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.750 qpair failed and we were unable to recover it. 00:29:30.750 [2024-07-15 16:21:06.373300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.750 [2024-07-15 16:21:06.373308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.750 qpair failed and we were unable to recover it. 
00:29:30.750 [2024-07-15 16:21:06.373763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.750 [2024-07-15 16:21:06.373771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.750 qpair failed and we were unable to recover it. 00:29:30.750 [2024-07-15 16:21:06.374171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.750 [2024-07-15 16:21:06.374179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.750 qpair failed and we were unable to recover it. 00:29:30.750 [2024-07-15 16:21:06.374565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.750 [2024-07-15 16:21:06.374573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.750 qpair failed and we were unable to recover it. 00:29:30.750 [2024-07-15 16:21:06.375010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.750 [2024-07-15 16:21:06.375019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.750 qpair failed and we were unable to recover it. 00:29:30.750 [2024-07-15 16:21:06.375441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.750 [2024-07-15 16:21:06.375449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.750 qpair failed and we were unable to recover it. 00:29:30.750 [2024-07-15 16:21:06.375709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.750 [2024-07-15 16:21:06.375718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.750 qpair failed and we were unable to recover it. 00:29:30.751 [2024-07-15 16:21:06.376139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.751 [2024-07-15 16:21:06.376148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.751 qpair failed and we were unable to recover it. 00:29:30.751 [2024-07-15 16:21:06.376576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.751 [2024-07-15 16:21:06.376584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.751 qpair failed and we were unable to recover it. 00:29:30.751 [2024-07-15 16:21:06.376970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.751 [2024-07-15 16:21:06.376978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.751 qpair failed and we were unable to recover it. 00:29:30.751 [2024-07-15 16:21:06.377372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.751 [2024-07-15 16:21:06.377380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.751 qpair failed and we were unable to recover it. 
00:29:30.751 [2024-07-15 16:21:06.377807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.751 [2024-07-15 16:21:06.377815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.751 qpair failed and we were unable to recover it. 00:29:30.751 [2024-07-15 16:21:06.378210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.751 [2024-07-15 16:21:06.378218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.751 qpair failed and we were unable to recover it. 00:29:30.751 [2024-07-15 16:21:06.378625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.751 [2024-07-15 16:21:06.378633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.751 qpair failed and we were unable to recover it. 00:29:30.751 [2024-07-15 16:21:06.379031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.751 [2024-07-15 16:21:06.379039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.751 qpair failed and we were unable to recover it. 00:29:30.751 [2024-07-15 16:21:06.379426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.751 [2024-07-15 16:21:06.379435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.751 qpair failed and we were unable to recover it. 00:29:30.751 [2024-07-15 16:21:06.379830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.751 [2024-07-15 16:21:06.379839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.751 qpair failed and we were unable to recover it. 00:29:30.751 [2024-07-15 16:21:06.380235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.751 [2024-07-15 16:21:06.380244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.751 qpair failed and we were unable to recover it. 00:29:30.751 [2024-07-15 16:21:06.380624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.751 [2024-07-15 16:21:06.380633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.751 qpair failed and we were unable to recover it. 00:29:30.751 [2024-07-15 16:21:06.381046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.751 [2024-07-15 16:21:06.381054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.751 qpair failed and we were unable to recover it. 00:29:30.751 [2024-07-15 16:21:06.381460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.751 [2024-07-15 16:21:06.381468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.751 qpair failed and we were unable to recover it. 
00:29:30.751 [2024-07-15 16:21:06.381861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.751 [2024-07-15 16:21:06.381869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.751 qpair failed and we were unable to recover it. 00:29:30.751 [2024-07-15 16:21:06.382264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.751 [2024-07-15 16:21:06.382272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.751 qpair failed and we were unable to recover it. 00:29:30.751 [2024-07-15 16:21:06.382530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.751 [2024-07-15 16:21:06.382538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.751 qpair failed and we were unable to recover it. 00:29:30.751 [2024-07-15 16:21:06.382933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.751 [2024-07-15 16:21:06.382942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.751 qpair failed and we were unable to recover it. 00:29:30.751 [2024-07-15 16:21:06.383341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.751 [2024-07-15 16:21:06.383349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.751 qpair failed and we were unable to recover it. 00:29:30.751 [2024-07-15 16:21:06.383842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.751 [2024-07-15 16:21:06.383851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.751 qpair failed and we were unable to recover it. 00:29:30.751 [2024-07-15 16:21:06.384168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.751 [2024-07-15 16:21:06.384176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.751 qpair failed and we were unable to recover it. 00:29:30.751 [2024-07-15 16:21:06.384577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.751 [2024-07-15 16:21:06.384586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.751 qpair failed and we were unable to recover it. 00:29:30.751 [2024-07-15 16:21:06.384984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.751 [2024-07-15 16:21:06.384992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.751 qpair failed and we were unable to recover it. 00:29:30.751 [2024-07-15 16:21:06.385376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.751 [2024-07-15 16:21:06.385386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.751 qpair failed and we were unable to recover it. 
00:29:30.751 [2024-07-15 16:21:06.385802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.751 [2024-07-15 16:21:06.385810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.751 qpair failed and we were unable to recover it. 00:29:30.751 [2024-07-15 16:21:06.386307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.751 [2024-07-15 16:21:06.386336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.751 qpair failed and we were unable to recover it. 00:29:30.751 [2024-07-15 16:21:06.386788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.751 [2024-07-15 16:21:06.386797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.751 qpair failed and we were unable to recover it. 00:29:30.751 [2024-07-15 16:21:06.387195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.751 [2024-07-15 16:21:06.387204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.751 qpair failed and we were unable to recover it. 00:29:30.751 [2024-07-15 16:21:06.387636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.751 [2024-07-15 16:21:06.387644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.751 qpair failed and we were unable to recover it. 00:29:30.751 [2024-07-15 16:21:06.387971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.751 [2024-07-15 16:21:06.387980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.751 qpair failed and we were unable to recover it. 00:29:30.751 [2024-07-15 16:21:06.388381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.751 [2024-07-15 16:21:06.388389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.751 qpair failed and we were unable to recover it. 00:29:30.751 [2024-07-15 16:21:06.388791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.751 [2024-07-15 16:21:06.388799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.751 qpair failed and we were unable to recover it. 00:29:30.751 [2024-07-15 16:21:06.389225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.751 [2024-07-15 16:21:06.389234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.751 qpair failed and we were unable to recover it. 00:29:30.751 [2024-07-15 16:21:06.389664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.751 [2024-07-15 16:21:06.389672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.751 qpair failed and we were unable to recover it. 
00:29:30.751 [2024-07-15 16:21:06.390078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.751 [2024-07-15 16:21:06.390085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.751 qpair failed and we were unable to recover it. 00:29:30.751 [2024-07-15 16:21:06.390466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.751 [2024-07-15 16:21:06.390474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.751 qpair failed and we were unable to recover it. 00:29:30.751 [2024-07-15 16:21:06.390819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.751 [2024-07-15 16:21:06.390827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.751 qpair failed and we were unable to recover it. 00:29:30.751 [2024-07-15 16:21:06.391242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.751 [2024-07-15 16:21:06.391261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.751 qpair failed and we were unable to recover it. 00:29:30.751 [2024-07-15 16:21:06.391690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.751 [2024-07-15 16:21:06.391699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.751 qpair failed and we were unable to recover it. 00:29:30.751 [2024-07-15 16:21:06.392127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.751 [2024-07-15 16:21:06.392136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.751 qpair failed and we were unable to recover it. 00:29:30.751 [2024-07-15 16:21:06.392504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.752 [2024-07-15 16:21:06.392513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.752 qpair failed and we were unable to recover it. 00:29:30.752 [2024-07-15 16:21:06.392908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.752 [2024-07-15 16:21:06.392917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.752 qpair failed and we were unable to recover it. 00:29:30.752 [2024-07-15 16:21:06.393408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.752 [2024-07-15 16:21:06.393437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.752 qpair failed and we were unable to recover it. 00:29:30.752 [2024-07-15 16:21:06.393773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.752 [2024-07-15 16:21:06.393782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.752 qpair failed and we were unable to recover it. 
00:29:30.752 [2024-07-15 16:21:06.393859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.752 [2024-07-15 16:21:06.393868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.752 qpair failed and we were unable to recover it. 00:29:30.752 [2024-07-15 16:21:06.394274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.752 [2024-07-15 16:21:06.394283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.752 qpair failed and we were unable to recover it. 00:29:30.752 [2024-07-15 16:21:06.394688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.752 [2024-07-15 16:21:06.394696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.752 qpair failed and we were unable to recover it. 00:29:30.752 [2024-07-15 16:21:06.395099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.752 [2024-07-15 16:21:06.395107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.752 qpair failed and we were unable to recover it. 00:29:30.752 [2024-07-15 16:21:06.395372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.752 [2024-07-15 16:21:06.395380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.752 qpair failed and we were unable to recover it. 00:29:30.752 [2024-07-15 16:21:06.395817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.752 [2024-07-15 16:21:06.395825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.752 qpair failed and we were unable to recover it. 00:29:30.752 [2024-07-15 16:21:06.396223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.752 [2024-07-15 16:21:06.396231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.752 qpair failed and we were unable to recover it. 00:29:30.752 [2024-07-15 16:21:06.396633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.752 [2024-07-15 16:21:06.396641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.752 qpair failed and we were unable to recover it. 00:29:30.752 [2024-07-15 16:21:06.397027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.752 [2024-07-15 16:21:06.397036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.752 qpair failed and we were unable to recover it. 00:29:30.752 [2024-07-15 16:21:06.397432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.752 [2024-07-15 16:21:06.397441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.752 qpair failed and we were unable to recover it. 
00:29:30.752 [2024-07-15 16:21:06.397840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.752 [2024-07-15 16:21:06.397849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.752 qpair failed and we were unable to recover it. 00:29:30.752 [2024-07-15 16:21:06.398153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.752 [2024-07-15 16:21:06.398161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.752 qpair failed and we were unable to recover it. 00:29:30.752 [2024-07-15 16:21:06.398598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.752 [2024-07-15 16:21:06.398606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.752 qpair failed and we were unable to recover it. 00:29:30.752 [2024-07-15 16:21:06.399046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.752 [2024-07-15 16:21:06.399056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.752 qpair failed and we were unable to recover it. 00:29:30.752 [2024-07-15 16:21:06.399346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.752 [2024-07-15 16:21:06.399354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.752 qpair failed and we were unable to recover it. 00:29:30.752 [2024-07-15 16:21:06.399755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.752 [2024-07-15 16:21:06.399763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.752 qpair failed and we were unable to recover it. 00:29:30.752 [2024-07-15 16:21:06.400164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.752 [2024-07-15 16:21:06.400173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.752 qpair failed and we were unable to recover it. 00:29:30.752 [2024-07-15 16:21:06.400606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.752 [2024-07-15 16:21:06.400615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.752 qpair failed and we were unable to recover it. 00:29:30.752 [2024-07-15 16:21:06.401050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.752 [2024-07-15 16:21:06.401057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.752 qpair failed and we were unable to recover it. 00:29:30.752 [2024-07-15 16:21:06.401463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.752 [2024-07-15 16:21:06.401473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.752 qpair failed and we were unable to recover it. 
00:29:30.752 [2024-07-15 16:21:06.401870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.752 [2024-07-15 16:21:06.401879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.752 qpair failed and we were unable to recover it. 00:29:30.752 [2024-07-15 16:21:06.402259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.752 [2024-07-15 16:21:06.402267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.752 qpair failed and we were unable to recover it. 00:29:30.752 [2024-07-15 16:21:06.402667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.752 [2024-07-15 16:21:06.402675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.752 qpair failed and we were unable to recover it. 00:29:30.752 [2024-07-15 16:21:06.403069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.752 [2024-07-15 16:21:06.403078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.752 qpair failed and we were unable to recover it. 00:29:30.752 [2024-07-15 16:21:06.403474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.752 [2024-07-15 16:21:06.403483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.752 qpair failed and we were unable to recover it. 00:29:30.752 [2024-07-15 16:21:06.403853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.752 [2024-07-15 16:21:06.403861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.752 qpair failed and we were unable to recover it. 00:29:30.752 [2024-07-15 16:21:06.404261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.752 [2024-07-15 16:21:06.404269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.752 qpair failed and we were unable to recover it. 00:29:30.752 [2024-07-15 16:21:06.404685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.752 [2024-07-15 16:21:06.404694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.752 qpair failed and we were unable to recover it. 00:29:30.752 [2024-07-15 16:21:06.405095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.752 [2024-07-15 16:21:06.405103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.752 qpair failed and we were unable to recover it. 00:29:30.752 [2024-07-15 16:21:06.405521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.752 [2024-07-15 16:21:06.405530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.752 qpair failed and we were unable to recover it. 
00:29:30.752 [2024-07-15 16:21:06.405930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.752 [2024-07-15 16:21:06.405939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420
00:29:30.752 qpair failed and we were unable to recover it.
00:29:30.752 [2024-07-15 16:21:06.406434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.752 [2024-07-15 16:21:06.406463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420
00:29:30.752 qpair failed and we were unable to recover it.
00:29:30.752 [2024-07-15 16:21:06.406865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.752 [2024-07-15 16:21:06.406875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420
00:29:30.752 qpair failed and we were unable to recover it.
00:29:30.752 [2024-07-15 16:21:06.407047] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:30.752 [2024-07-15 16:21:06.407069] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:30.752 [2024-07-15 16:21:06.407075] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:30.752 [2024-07-15 16:21:06.407081] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:30.752 [2024-07-15 16:21:06.407085] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:30.752 [2024-07-15 16:21:06.407385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.752 [2024-07-15 16:21:06.407299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:29:30.752 [2024-07-15 16:21:06.407413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420
00:29:30.752 qpair failed and we were unable to recover it.
00:29:30.752 [2024-07-15 16:21:06.407453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:29:30.752 [2024-07-15 16:21:06.407626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:29:30.752 [2024-07-15 16:21:06.407628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:29:30.753 [2024-07-15 16:21:06.407883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.753 [2024-07-15 16:21:06.407892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420
00:29:30.753 qpair failed and we were unable to recover it.
00:29:30.753 [2024-07-15 16:21:06.408434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.753 [2024-07-15 16:21:06.408464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420
00:29:30.753 qpair failed and we were unable to recover it.
00:29:30.753 [2024-07-15 16:21:06.408873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.753 [2024-07-15 16:21:06.408883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420
00:29:30.753 qpair failed and we were unable to recover it.
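The app_setup_trace notices above describe how to pull the tracepoint data while the nvmf target is still running. A minimal shell sketch of that workflow, assuming a shell on the node hosting the target and that instance 0 is the only SPDK application up (both taken from the notices, not verified here); the copy destination path is arbitrary:

  # capture a runtime snapshot of the nvmf tracepoints for instance 0 (command quoted from the notice)
  spdk_trace -s nvmf -i 0
  # or keep the shared-memory trace file for offline analysis/debug
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0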
00:29:30.753 [2024-07-15 16:21:06.409336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.753 [2024-07-15 16:21:06.409365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.753 qpair failed and we were unable to recover it. 00:29:30.753 [2024-07-15 16:21:06.409774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.753 [2024-07-15 16:21:06.409783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.753 qpair failed and we were unable to recover it. 00:29:30.753 [2024-07-15 16:21:06.410188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.753 [2024-07-15 16:21:06.410196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.753 qpair failed and we were unable to recover it. 00:29:30.753 [2024-07-15 16:21:06.410599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.753 [2024-07-15 16:21:06.410607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.753 qpair failed and we were unable to recover it. 00:29:30.753 [2024-07-15 16:21:06.410955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.753 [2024-07-15 16:21:06.410963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.753 qpair failed and we were unable to recover it. 00:29:30.753 [2024-07-15 16:21:06.411376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.753 [2024-07-15 16:21:06.411385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.753 qpair failed and we were unable to recover it. 00:29:30.753 [2024-07-15 16:21:06.411799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.753 [2024-07-15 16:21:06.411809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.753 qpair failed and we were unable to recover it. 00:29:30.753 [2024-07-15 16:21:06.412260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.753 [2024-07-15 16:21:06.412269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.753 qpair failed and we were unable to recover it. 00:29:30.753 [2024-07-15 16:21:06.412680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.753 [2024-07-15 16:21:06.412688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.753 qpair failed and we were unable to recover it. 00:29:30.753 [2024-07-15 16:21:06.413082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.753 [2024-07-15 16:21:06.413090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.753 qpair failed and we were unable to recover it. 
00:29:30.753 [2024-07-15 16:21:06.413427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.753 [2024-07-15 16:21:06.413436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.753 qpair failed and we were unable to recover it. 00:29:30.753 [2024-07-15 16:21:06.413830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.753 [2024-07-15 16:21:06.413838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.753 qpair failed and we were unable to recover it. 00:29:30.753 [2024-07-15 16:21:06.414154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.753 [2024-07-15 16:21:06.414163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.753 qpair failed and we were unable to recover it. 00:29:30.753 [2024-07-15 16:21:06.414471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.753 [2024-07-15 16:21:06.414479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.753 qpair failed and we were unable to recover it. 00:29:30.753 [2024-07-15 16:21:06.414873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.753 [2024-07-15 16:21:06.414880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.753 qpair failed and we were unable to recover it. 00:29:30.753 [2024-07-15 16:21:06.415229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.753 [2024-07-15 16:21:06.415237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.753 qpair failed and we were unable to recover it. 00:29:30.753 [2024-07-15 16:21:06.415658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.753 [2024-07-15 16:21:06.415666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.753 qpair failed and we were unable to recover it. 00:29:30.753 [2024-07-15 16:21:06.416073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.753 [2024-07-15 16:21:06.416081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.753 qpair failed and we were unable to recover it. 00:29:30.753 [2024-07-15 16:21:06.416341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.753 [2024-07-15 16:21:06.416350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.753 qpair failed and we were unable to recover it. 00:29:30.753 [2024-07-15 16:21:06.416749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.753 [2024-07-15 16:21:06.416765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.753 qpair failed and we were unable to recover it. 
00:29:30.753 [2024-07-15 16:21:06.417049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.753 [2024-07-15 16:21:06.417056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.753 qpair failed and we were unable to recover it. 00:29:30.753 [2024-07-15 16:21:06.417320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.753 [2024-07-15 16:21:06.417330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.753 qpair failed and we were unable to recover it. 00:29:30.753 [2024-07-15 16:21:06.417722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.753 [2024-07-15 16:21:06.417730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.753 qpair failed and we were unable to recover it. 00:29:30.753 [2024-07-15 16:21:06.418118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.753 [2024-07-15 16:21:06.418130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.753 qpair failed and we were unable to recover it. 00:29:30.753 [2024-07-15 16:21:06.418544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.753 [2024-07-15 16:21:06.418553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.753 qpair failed and we were unable to recover it. 00:29:30.753 [2024-07-15 16:21:06.418952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.753 [2024-07-15 16:21:06.418961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.753 qpair failed and we were unable to recover it. 00:29:30.753 [2024-07-15 16:21:06.419514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.753 [2024-07-15 16:21:06.419543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.753 qpair failed and we were unable to recover it. 00:29:30.753 [2024-07-15 16:21:06.419945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.753 [2024-07-15 16:21:06.419954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.753 qpair failed and we were unable to recover it. 00:29:30.753 [2024-07-15 16:21:06.420476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.753 [2024-07-15 16:21:06.420505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.753 qpair failed and we were unable to recover it. 00:29:30.753 [2024-07-15 16:21:06.420785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.753 [2024-07-15 16:21:06.420795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.753 qpair failed and we were unable to recover it. 
00:29:30.753 [2024-07-15 16:21:06.421061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.753 [2024-07-15 16:21:06.421069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.753 qpair failed and we were unable to recover it. 00:29:30.753 [2024-07-15 16:21:06.421291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.753 [2024-07-15 16:21:06.421299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.753 qpair failed and we were unable to recover it. 00:29:30.754 [2024-07-15 16:21:06.421716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.754 [2024-07-15 16:21:06.421724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.754 qpair failed and we were unable to recover it. 00:29:30.754 [2024-07-15 16:21:06.422178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.754 [2024-07-15 16:21:06.422188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.754 qpair failed and we were unable to recover it. 00:29:30.754 [2024-07-15 16:21:06.422593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.754 [2024-07-15 16:21:06.422600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.754 qpair failed and we were unable to recover it. 00:29:30.754 [2024-07-15 16:21:06.422874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.754 [2024-07-15 16:21:06.422882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.754 qpair failed and we were unable to recover it. 00:29:30.754 [2024-07-15 16:21:06.423281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.754 [2024-07-15 16:21:06.423290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.754 qpair failed and we were unable to recover it. 00:29:30.754 [2024-07-15 16:21:06.423667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.754 [2024-07-15 16:21:06.423675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.754 qpair failed and we were unable to recover it. 00:29:30.754 [2024-07-15 16:21:06.424071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.754 [2024-07-15 16:21:06.424079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.754 qpair failed and we were unable to recover it. 00:29:30.754 [2024-07-15 16:21:06.424472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.754 [2024-07-15 16:21:06.424480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.754 qpair failed and we were unable to recover it. 
00:29:30.754 [2024-07-15 16:21:06.424903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.754 [2024-07-15 16:21:06.424912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.754 qpair failed and we were unable to recover it. 00:29:30.754 [2024-07-15 16:21:06.425208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.754 [2024-07-15 16:21:06.425217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.754 qpair failed and we were unable to recover it. 00:29:30.754 [2024-07-15 16:21:06.425611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.754 [2024-07-15 16:21:06.425619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.754 qpair failed and we were unable to recover it. 00:29:30.754 [2024-07-15 16:21:06.426007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.754 [2024-07-15 16:21:06.426014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.754 qpair failed and we were unable to recover it. 00:29:30.754 [2024-07-15 16:21:06.426433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.754 [2024-07-15 16:21:06.426442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.754 qpair failed and we were unable to recover it. 00:29:30.754 [2024-07-15 16:21:06.426680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.754 [2024-07-15 16:21:06.426688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.754 qpair failed and we were unable to recover it. 00:29:30.754 [2024-07-15 16:21:06.427126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.754 [2024-07-15 16:21:06.427134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.754 qpair failed and we were unable to recover it. 00:29:30.754 [2024-07-15 16:21:06.427523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.754 [2024-07-15 16:21:06.427531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.754 qpair failed and we were unable to recover it. 00:29:30.754 [2024-07-15 16:21:06.427954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.754 [2024-07-15 16:21:06.427962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.754 qpair failed and we were unable to recover it. 00:29:30.754 [2024-07-15 16:21:06.428453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.754 [2024-07-15 16:21:06.428482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.754 qpair failed and we were unable to recover it. 
00:29:30.754 [2024-07-15 16:21:06.428887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.754 [2024-07-15 16:21:06.428896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.754 qpair failed and we were unable to recover it. 00:29:30.754 [2024-07-15 16:21:06.429423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.754 [2024-07-15 16:21:06.429452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.754 qpair failed and we were unable to recover it. 00:29:30.754 [2024-07-15 16:21:06.429871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.754 [2024-07-15 16:21:06.429881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.754 qpair failed and we were unable to recover it. 00:29:30.754 [2024-07-15 16:21:06.430348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.754 [2024-07-15 16:21:06.430378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.754 qpair failed and we were unable to recover it. 00:29:30.754 [2024-07-15 16:21:06.430657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.754 [2024-07-15 16:21:06.430666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.754 qpair failed and we were unable to recover it. 00:29:30.754 [2024-07-15 16:21:06.430972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.754 [2024-07-15 16:21:06.430981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.754 qpair failed and we were unable to recover it. 00:29:30.754 [2024-07-15 16:21:06.431420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.754 [2024-07-15 16:21:06.431429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.754 qpair failed and we were unable to recover it. 00:29:30.754 [2024-07-15 16:21:06.431555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.754 [2024-07-15 16:21:06.431564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.754 qpair failed and we were unable to recover it. 00:29:30.754 [2024-07-15 16:21:06.431952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.754 [2024-07-15 16:21:06.431960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.754 qpair failed and we were unable to recover it. 00:29:30.754 [2024-07-15 16:21:06.432233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.754 [2024-07-15 16:21:06.432244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.754 qpair failed and we were unable to recover it. 
00:29:30.754 [2024-07-15 16:21:06.432672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.754 [2024-07-15 16:21:06.432680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.754 qpair failed and we were unable to recover it. 00:29:30.754 [2024-07-15 16:21:06.433074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.754 [2024-07-15 16:21:06.433082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.754 qpair failed and we were unable to recover it. 00:29:30.754 [2024-07-15 16:21:06.433484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.754 [2024-07-15 16:21:06.433492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.754 qpair failed and we were unable to recover it. 00:29:30.754 [2024-07-15 16:21:06.433884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.754 [2024-07-15 16:21:06.433892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.754 qpair failed and we were unable to recover it. 00:29:30.754 [2024-07-15 16:21:06.434314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.754 [2024-07-15 16:21:06.434323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.754 qpair failed and we were unable to recover it. 00:29:30.754 [2024-07-15 16:21:06.434714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.754 [2024-07-15 16:21:06.434722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.754 qpair failed and we were unable to recover it. 00:29:30.754 [2024-07-15 16:21:06.435118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.754 [2024-07-15 16:21:06.435130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.754 qpair failed and we were unable to recover it. 00:29:30.754 [2024-07-15 16:21:06.435517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.754 [2024-07-15 16:21:06.435527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.754 qpair failed and we were unable to recover it. 00:29:30.754 [2024-07-15 16:21:06.435943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.754 [2024-07-15 16:21:06.435951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.754 qpair failed and we were unable to recover it. 00:29:30.754 [2024-07-15 16:21:06.436164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.754 [2024-07-15 16:21:06.436175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.754 qpair failed and we were unable to recover it. 
00:29:30.754 [2024-07-15 16:21:06.436587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.754 [2024-07-15 16:21:06.436595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.754 qpair failed and we were unable to recover it. 00:29:30.754 [2024-07-15 16:21:06.436981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.754 [2024-07-15 16:21:06.436989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.755 qpair failed and we were unable to recover it. 00:29:30.755 [2024-07-15 16:21:06.437485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.755 [2024-07-15 16:21:06.437514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.755 qpair failed and we were unable to recover it. 00:29:30.755 [2024-07-15 16:21:06.437922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.755 [2024-07-15 16:21:06.437932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.755 qpair failed and we were unable to recover it. 00:29:30.755 [2024-07-15 16:21:06.438427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.755 [2024-07-15 16:21:06.438456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.755 qpair failed and we were unable to recover it. 00:29:30.755 [2024-07-15 16:21:06.438724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.755 [2024-07-15 16:21:06.438733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.755 qpair failed and we were unable to recover it. 00:29:30.755 [2024-07-15 16:21:06.438924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.755 [2024-07-15 16:21:06.438933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.755 qpair failed and we were unable to recover it. 00:29:30.755 [2024-07-15 16:21:06.439146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.755 [2024-07-15 16:21:06.439155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.755 qpair failed and we were unable to recover it. 00:29:30.755 [2024-07-15 16:21:06.439381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.755 [2024-07-15 16:21:06.439390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.755 qpair failed and we were unable to recover it. 00:29:30.755 [2024-07-15 16:21:06.439697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.755 [2024-07-15 16:21:06.439706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.755 qpair failed and we were unable to recover it. 
00:29:30.755 [2024-07-15 16:21:06.440140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.755 [2024-07-15 16:21:06.440148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.755 qpair failed and we were unable to recover it. 00:29:30.755 [2024-07-15 16:21:06.440365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.755 [2024-07-15 16:21:06.440376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.755 qpair failed and we were unable to recover it. 00:29:30.755 [2024-07-15 16:21:06.440768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.755 [2024-07-15 16:21:06.440776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.755 qpair failed and we were unable to recover it. 00:29:30.755 [2024-07-15 16:21:06.440998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.755 [2024-07-15 16:21:06.441005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.755 qpair failed and we were unable to recover it. 00:29:30.755 [2024-07-15 16:21:06.441408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.755 [2024-07-15 16:21:06.441417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.755 qpair failed and we were unable to recover it. 00:29:30.755 [2024-07-15 16:21:06.441811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.755 [2024-07-15 16:21:06.441818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.755 qpair failed and we were unable to recover it. 00:29:30.755 [2024-07-15 16:21:06.442211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.755 [2024-07-15 16:21:06.442219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.755 qpair failed and we were unable to recover it. 00:29:30.755 [2024-07-15 16:21:06.442613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.755 [2024-07-15 16:21:06.442621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.755 qpair failed and we were unable to recover it. 00:29:30.755 [2024-07-15 16:21:06.443035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.755 [2024-07-15 16:21:06.443043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.755 qpair failed and we were unable to recover it. 00:29:30.755 [2024-07-15 16:21:06.443442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.755 [2024-07-15 16:21:06.443450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.755 qpair failed and we were unable to recover it. 
00:29:30.755 [2024-07-15 16:21:06.443843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.755 [2024-07-15 16:21:06.443852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.755 qpair failed and we were unable to recover it. 00:29:30.755 [2024-07-15 16:21:06.444237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.755 [2024-07-15 16:21:06.444246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.755 qpair failed and we were unable to recover it. 00:29:30.755 [2024-07-15 16:21:06.444649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.755 [2024-07-15 16:21:06.444657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.755 qpair failed and we were unable to recover it. 00:29:30.755 [2024-07-15 16:21:06.445046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.755 [2024-07-15 16:21:06.445054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.755 qpair failed and we were unable to recover it. 00:29:30.755 [2024-07-15 16:21:06.445440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.755 [2024-07-15 16:21:06.445448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.755 qpair failed and we were unable to recover it. 00:29:30.755 [2024-07-15 16:21:06.445865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.755 [2024-07-15 16:21:06.445874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.755 qpair failed and we were unable to recover it. 00:29:30.755 [2024-07-15 16:21:06.446280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.755 [2024-07-15 16:21:06.446288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.755 qpair failed and we were unable to recover it. 00:29:30.755 [2024-07-15 16:21:06.446680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.755 [2024-07-15 16:21:06.446688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.755 qpair failed and we were unable to recover it. 00:29:30.755 [2024-07-15 16:21:06.447088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.755 [2024-07-15 16:21:06.447095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.755 qpair failed and we were unable to recover it. 00:29:30.755 [2024-07-15 16:21:06.447482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.755 [2024-07-15 16:21:06.447492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.755 qpair failed and we were unable to recover it. 
00:29:30.755 [2024-07-15 16:21:06.447884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.755 [2024-07-15 16:21:06.447892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420
00:29:30.755 qpair failed and we were unable to recover it.
00:29:30.755 [2024-07-15 16:21:06.448149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.755 [2024-07-15 16:21:06.448157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420
00:29:30.755 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats for every connection attempt from 16:21:06.448 through 16:21:06.527: posix_sock_create: connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. ...]
00:29:30.761 [2024-07-15 16:21:06.527422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.761 [2024-07-15 16:21:06.527430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420
00:29:30.761 qpair failed and we were unable to recover it.
00:29:30.761 [2024-07-15 16:21:06.527777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.761 [2024-07-15 16:21:06.527785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.761 qpair failed and we were unable to recover it. 00:29:30.761 [2024-07-15 16:21:06.528172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.761 [2024-07-15 16:21:06.528180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.761 qpair failed and we were unable to recover it. 00:29:30.761 [2024-07-15 16:21:06.528595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.761 [2024-07-15 16:21:06.528604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.761 qpair failed and we were unable to recover it. 00:29:30.761 [2024-07-15 16:21:06.528827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.761 [2024-07-15 16:21:06.528836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.761 qpair failed and we were unable to recover it. 00:29:30.761 [2024-07-15 16:21:06.529158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.761 [2024-07-15 16:21:06.529167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.761 qpair failed and we were unable to recover it. 00:29:30.761 [2024-07-15 16:21:06.529559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.761 [2024-07-15 16:21:06.529567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.761 qpair failed and we were unable to recover it. 00:29:30.761 [2024-07-15 16:21:06.529983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.761 [2024-07-15 16:21:06.529991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.761 qpair failed and we were unable to recover it. 00:29:30.761 [2024-07-15 16:21:06.530383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.761 [2024-07-15 16:21:06.530391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.761 qpair failed and we were unable to recover it. 00:29:30.761 [2024-07-15 16:21:06.530795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.761 [2024-07-15 16:21:06.530803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.761 qpair failed and we were unable to recover it. 00:29:30.761 [2024-07-15 16:21:06.531205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.761 [2024-07-15 16:21:06.531213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.761 qpair failed and we were unable to recover it. 
00:29:30.761 [2024-07-15 16:21:06.531530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.761 [2024-07-15 16:21:06.531538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.761 qpair failed and we were unable to recover it. 00:29:30.761 [2024-07-15 16:21:06.531941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.761 [2024-07-15 16:21:06.531949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.761 qpair failed and we were unable to recover it. 00:29:30.761 [2024-07-15 16:21:06.532327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.761 [2024-07-15 16:21:06.532334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.761 qpair failed and we were unable to recover it. 00:29:30.761 [2024-07-15 16:21:06.532591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.761 [2024-07-15 16:21:06.532598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.761 qpair failed and we were unable to recover it. 00:29:30.761 [2024-07-15 16:21:06.533012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.761 [2024-07-15 16:21:06.533019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.761 qpair failed and we were unable to recover it. 00:29:30.761 [2024-07-15 16:21:06.533438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.761 [2024-07-15 16:21:06.533446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.761 qpair failed and we were unable to recover it. 00:29:30.761 [2024-07-15 16:21:06.533840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.761 [2024-07-15 16:21:06.533848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.761 qpair failed and we were unable to recover it. 00:29:30.761 [2024-07-15 16:21:06.534242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.761 [2024-07-15 16:21:06.534250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.761 qpair failed and we were unable to recover it. 00:29:30.761 [2024-07-15 16:21:06.534632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.761 [2024-07-15 16:21:06.534640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.761 qpair failed and we were unable to recover it. 00:29:30.761 [2024-07-15 16:21:06.535030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.761 [2024-07-15 16:21:06.535037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.761 qpair failed and we were unable to recover it. 
00:29:30.761 [2024-07-15 16:21:06.535338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.761 [2024-07-15 16:21:06.535346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.761 qpair failed and we were unable to recover it. 00:29:30.761 [2024-07-15 16:21:06.535570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.761 [2024-07-15 16:21:06.535579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.761 qpair failed and we were unable to recover it. 00:29:30.761 [2024-07-15 16:21:06.535972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.761 [2024-07-15 16:21:06.535981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.761 qpair failed and we were unable to recover it. 00:29:30.761 [2024-07-15 16:21:06.536388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.761 [2024-07-15 16:21:06.536397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.761 qpair failed and we were unable to recover it. 00:29:30.761 [2024-07-15 16:21:06.536791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.761 [2024-07-15 16:21:06.536799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.761 qpair failed and we were unable to recover it. 00:29:30.761 [2024-07-15 16:21:06.537222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.761 [2024-07-15 16:21:06.537230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.761 qpair failed and we were unable to recover it. 00:29:30.761 [2024-07-15 16:21:06.537639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.761 [2024-07-15 16:21:06.537647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.761 qpair failed and we were unable to recover it. 00:29:30.761 [2024-07-15 16:21:06.538032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.761 [2024-07-15 16:21:06.538041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.761 qpair failed and we were unable to recover it. 00:29:30.761 [2024-07-15 16:21:06.538299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.761 [2024-07-15 16:21:06.538306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.761 qpair failed and we were unable to recover it. 00:29:30.761 [2024-07-15 16:21:06.538705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.761 [2024-07-15 16:21:06.538714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.761 qpair failed and we were unable to recover it. 
00:29:30.761 [2024-07-15 16:21:06.539163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.761 [2024-07-15 16:21:06.539171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.761 qpair failed and we were unable to recover it. 00:29:30.761 [2024-07-15 16:21:06.539461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.761 [2024-07-15 16:21:06.539470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.761 qpair failed and we were unable to recover it. 00:29:30.761 [2024-07-15 16:21:06.539863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.761 [2024-07-15 16:21:06.539871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.761 qpair failed and we were unable to recover it. 00:29:30.761 [2024-07-15 16:21:06.540264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.761 [2024-07-15 16:21:06.540272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.761 qpair failed and we were unable to recover it. 00:29:30.761 [2024-07-15 16:21:06.540671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.761 [2024-07-15 16:21:06.540680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.761 qpair failed and we were unable to recover it. 00:29:30.761 [2024-07-15 16:21:06.540939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.761 [2024-07-15 16:21:06.540946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.761 qpair failed and we were unable to recover it. 00:29:30.761 [2024-07-15 16:21:06.541331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.761 [2024-07-15 16:21:06.541339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.761 qpair failed and we were unable to recover it. 00:29:30.761 [2024-07-15 16:21:06.541734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.761 [2024-07-15 16:21:06.541742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.761 qpair failed and we were unable to recover it. 00:29:30.761 [2024-07-15 16:21:06.541989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.761 [2024-07-15 16:21:06.541996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.761 qpair failed and we were unable to recover it. 00:29:30.761 [2024-07-15 16:21:06.542418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.761 [2024-07-15 16:21:06.542427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.761 qpair failed and we were unable to recover it. 
00:29:30.761 [2024-07-15 16:21:06.542826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.762 [2024-07-15 16:21:06.542834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.762 qpair failed and we were unable to recover it. 00:29:30.762 [2024-07-15 16:21:06.543127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.762 [2024-07-15 16:21:06.543136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.762 qpair failed and we were unable to recover it. 00:29:30.762 [2024-07-15 16:21:06.543410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.762 [2024-07-15 16:21:06.543419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.762 qpair failed and we were unable to recover it. 00:29:30.762 [2024-07-15 16:21:06.543817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.762 [2024-07-15 16:21:06.543825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.762 qpair failed and we were unable to recover it. 00:29:30.762 [2024-07-15 16:21:06.544328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.762 [2024-07-15 16:21:06.544357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.762 qpair failed and we were unable to recover it. 00:29:30.762 [2024-07-15 16:21:06.544765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.762 [2024-07-15 16:21:06.544775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.762 qpair failed and we were unable to recover it. 00:29:30.762 [2024-07-15 16:21:06.545191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.762 [2024-07-15 16:21:06.545199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.762 qpair failed and we were unable to recover it. 00:29:30.762 [2024-07-15 16:21:06.545675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.762 [2024-07-15 16:21:06.545683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.762 qpair failed and we were unable to recover it. 00:29:30.762 [2024-07-15 16:21:06.546068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.762 [2024-07-15 16:21:06.546076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.762 qpair failed and we were unable to recover it. 00:29:30.762 [2024-07-15 16:21:06.546461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.762 [2024-07-15 16:21:06.546470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.762 qpair failed and we were unable to recover it. 
00:29:30.762 [2024-07-15 16:21:06.546890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.762 [2024-07-15 16:21:06.546899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.762 qpair failed and we were unable to recover it. 00:29:30.762 [2024-07-15 16:21:06.547304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.762 [2024-07-15 16:21:06.547312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.762 qpair failed and we were unable to recover it. 00:29:30.762 [2024-07-15 16:21:06.547753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.762 [2024-07-15 16:21:06.547761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.762 qpair failed and we were unable to recover it. 00:29:30.762 [2024-07-15 16:21:06.547981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.762 [2024-07-15 16:21:06.547989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.762 qpair failed and we were unable to recover it. 00:29:30.762 [2024-07-15 16:21:06.548377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.762 [2024-07-15 16:21:06.548386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.762 qpair failed and we were unable to recover it. 00:29:30.762 [2024-07-15 16:21:06.548594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.762 [2024-07-15 16:21:06.548605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.762 qpair failed and we were unable to recover it. 00:29:30.762 [2024-07-15 16:21:06.548927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.762 [2024-07-15 16:21:06.548936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.762 qpair failed and we were unable to recover it. 00:29:30.762 [2024-07-15 16:21:06.549335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.762 [2024-07-15 16:21:06.549344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.762 qpair failed and we were unable to recover it. 00:29:30.762 [2024-07-15 16:21:06.549763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.762 [2024-07-15 16:21:06.549771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.762 qpair failed and we were unable to recover it. 00:29:30.762 [2024-07-15 16:21:06.550168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.762 [2024-07-15 16:21:06.550176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.762 qpair failed and we were unable to recover it. 
00:29:30.762 [2024-07-15 16:21:06.550579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.762 [2024-07-15 16:21:06.550587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.762 qpair failed and we were unable to recover it. 00:29:30.762 [2024-07-15 16:21:06.550978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.762 [2024-07-15 16:21:06.550986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.762 qpair failed and we were unable to recover it. 00:29:30.762 [2024-07-15 16:21:06.551406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.762 [2024-07-15 16:21:06.551415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.762 qpair failed and we were unable to recover it. 00:29:30.762 [2024-07-15 16:21:06.551800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.762 [2024-07-15 16:21:06.551808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.762 qpair failed and we were unable to recover it. 00:29:30.762 [2024-07-15 16:21:06.552202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.762 [2024-07-15 16:21:06.552210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.762 qpair failed and we were unable to recover it. 00:29:30.762 [2024-07-15 16:21:06.552626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.762 [2024-07-15 16:21:06.552636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.762 qpair failed and we were unable to recover it. 00:29:30.762 [2024-07-15 16:21:06.553063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.762 [2024-07-15 16:21:06.553072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.762 qpair failed and we were unable to recover it. 00:29:30.762 [2024-07-15 16:21:06.553492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.762 [2024-07-15 16:21:06.553501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.762 qpair failed and we were unable to recover it. 00:29:30.762 [2024-07-15 16:21:06.553780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.762 [2024-07-15 16:21:06.553796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.762 qpair failed and we were unable to recover it. 00:29:30.762 [2024-07-15 16:21:06.554001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.762 [2024-07-15 16:21:06.554014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.762 qpair failed and we were unable to recover it. 
00:29:30.762 [2024-07-15 16:21:06.554397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.762 [2024-07-15 16:21:06.554405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.762 qpair failed and we were unable to recover it. 00:29:30.762 [2024-07-15 16:21:06.554792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.762 [2024-07-15 16:21:06.554800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.762 qpair failed and we were unable to recover it. 00:29:30.762 [2024-07-15 16:21:06.555196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.762 [2024-07-15 16:21:06.555205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.762 qpair failed and we were unable to recover it. 00:29:30.762 [2024-07-15 16:21:06.555597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.762 [2024-07-15 16:21:06.555606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.762 qpair failed and we were unable to recover it. 00:29:30.762 [2024-07-15 16:21:06.556025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.762 [2024-07-15 16:21:06.556034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.763 qpair failed and we were unable to recover it. 00:29:30.763 [2024-07-15 16:21:06.556420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.763 [2024-07-15 16:21:06.556428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.763 qpair failed and we were unable to recover it. 00:29:30.763 [2024-07-15 16:21:06.556843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.763 [2024-07-15 16:21:06.556850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.763 qpair failed and we were unable to recover it. 00:29:30.763 [2024-07-15 16:21:06.557107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.763 [2024-07-15 16:21:06.557114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.763 qpair failed and we were unable to recover it. 00:29:30.763 [2024-07-15 16:21:06.557532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.763 [2024-07-15 16:21:06.557541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.763 qpair failed and we were unable to recover it. 00:29:30.763 [2024-07-15 16:21:06.557974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.763 [2024-07-15 16:21:06.557981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.763 qpair failed and we were unable to recover it. 
00:29:30.763 [2024-07-15 16:21:06.558353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.763 [2024-07-15 16:21:06.558382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.763 qpair failed and we were unable to recover it. 00:29:30.763 [2024-07-15 16:21:06.558589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.763 [2024-07-15 16:21:06.558598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.763 qpair failed and we were unable to recover it. 00:29:30.763 [2024-07-15 16:21:06.558967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.763 [2024-07-15 16:21:06.558977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.763 qpair failed and we were unable to recover it. 00:29:30.763 [2024-07-15 16:21:06.559376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.763 [2024-07-15 16:21:06.559385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.763 qpair failed and we were unable to recover it. 00:29:30.763 [2024-07-15 16:21:06.559786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.763 [2024-07-15 16:21:06.559794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.763 qpair failed and we were unable to recover it. 00:29:30.763 [2024-07-15 16:21:06.560194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.763 [2024-07-15 16:21:06.560203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.763 qpair failed and we were unable to recover it. 00:29:30.763 [2024-07-15 16:21:06.560634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.763 [2024-07-15 16:21:06.560642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.763 qpair failed and we were unable to recover it. 00:29:30.763 [2024-07-15 16:21:06.560975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.763 [2024-07-15 16:21:06.560983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.763 qpair failed and we were unable to recover it. 00:29:30.763 [2024-07-15 16:21:06.561381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.763 [2024-07-15 16:21:06.561389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.763 qpair failed and we were unable to recover it. 00:29:30.763 [2024-07-15 16:21:06.561782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.763 [2024-07-15 16:21:06.561790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.763 qpair failed and we were unable to recover it. 
00:29:30.763 [2024-07-15 16:21:06.562205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.763 [2024-07-15 16:21:06.562213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.763 qpair failed and we were unable to recover it. 00:29:30.763 [2024-07-15 16:21:06.562607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.763 [2024-07-15 16:21:06.562615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.763 qpair failed and we were unable to recover it. 00:29:30.763 [2024-07-15 16:21:06.563012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.763 [2024-07-15 16:21:06.563020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.763 qpair failed and we were unable to recover it. 00:29:30.763 [2024-07-15 16:21:06.563227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.763 [2024-07-15 16:21:06.563237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.763 qpair failed and we were unable to recover it. 00:29:30.763 [2024-07-15 16:21:06.563473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.763 [2024-07-15 16:21:06.563481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.763 qpair failed and we were unable to recover it. 00:29:30.763 [2024-07-15 16:21:06.563875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.763 [2024-07-15 16:21:06.563884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.763 qpair failed and we were unable to recover it. 00:29:30.763 [2024-07-15 16:21:06.564279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.763 [2024-07-15 16:21:06.564287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.763 qpair failed and we were unable to recover it. 00:29:30.763 [2024-07-15 16:21:06.564684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.763 [2024-07-15 16:21:06.564693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.763 qpair failed and we were unable to recover it. 00:29:30.763 [2024-07-15 16:21:06.564957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.763 [2024-07-15 16:21:06.564966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.763 qpair failed and we were unable to recover it. 00:29:30.763 [2024-07-15 16:21:06.565181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.763 [2024-07-15 16:21:06.565191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.763 qpair failed and we were unable to recover it. 
00:29:30.763 [2024-07-15 16:21:06.565603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.763 [2024-07-15 16:21:06.565612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.763 qpair failed and we were unable to recover it. 00:29:30.763 [2024-07-15 16:21:06.565812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.763 [2024-07-15 16:21:06.565821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.763 qpair failed and we were unable to recover it. 00:29:30.763 [2024-07-15 16:21:06.566090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.763 [2024-07-15 16:21:06.566099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.763 qpair failed and we were unable to recover it. 00:29:30.763 [2024-07-15 16:21:06.566494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.763 [2024-07-15 16:21:06.566502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.763 qpair failed and we were unable to recover it. 00:29:30.763 [2024-07-15 16:21:06.566904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.763 [2024-07-15 16:21:06.566912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.763 qpair failed and we were unable to recover it. 00:29:30.763 [2024-07-15 16:21:06.567316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.763 [2024-07-15 16:21:06.567325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.763 qpair failed and we were unable to recover it. 00:29:30.763 [2024-07-15 16:21:06.567702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.763 [2024-07-15 16:21:06.567710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.763 qpair failed and we were unable to recover it. 00:29:30.763 [2024-07-15 16:21:06.567932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.763 [2024-07-15 16:21:06.567941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.763 qpair failed and we were unable to recover it. 00:29:30.763 [2024-07-15 16:21:06.568325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.763 [2024-07-15 16:21:06.568333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.763 qpair failed and we were unable to recover it. 00:29:30.763 [2024-07-15 16:21:06.568715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.763 [2024-07-15 16:21:06.568725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.763 qpair failed and we were unable to recover it. 
00:29:30.763 [2024-07-15 16:21:06.569144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.763 [2024-07-15 16:21:06.569153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.763 qpair failed and we were unable to recover it. 00:29:30.763 [2024-07-15 16:21:06.569421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.763 [2024-07-15 16:21:06.569429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.763 qpair failed and we were unable to recover it. 00:29:30.763 [2024-07-15 16:21:06.569835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.763 [2024-07-15 16:21:06.569843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.763 qpair failed and we were unable to recover it. 00:29:30.763 [2024-07-15 16:21:06.570067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.763 [2024-07-15 16:21:06.570074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.763 qpair failed and we were unable to recover it. 00:29:30.763 [2024-07-15 16:21:06.570477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.763 [2024-07-15 16:21:06.570485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.763 qpair failed and we were unable to recover it. 00:29:30.763 [2024-07-15 16:21:06.570878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.764 [2024-07-15 16:21:06.570886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.764 qpair failed and we were unable to recover it. 00:29:30.764 [2024-07-15 16:21:06.571279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.764 [2024-07-15 16:21:06.571287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.764 qpair failed and we were unable to recover it. 00:29:30.764 [2024-07-15 16:21:06.571682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.764 [2024-07-15 16:21:06.571690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.764 qpair failed and we were unable to recover it. 00:29:30.764 [2024-07-15 16:21:06.572100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.764 [2024-07-15 16:21:06.572108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.764 qpair failed and we were unable to recover it. 00:29:30.764 [2024-07-15 16:21:06.572516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.764 [2024-07-15 16:21:06.572524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.764 qpair failed and we were unable to recover it. 
00:29:30.764 [2024-07-15 16:21:06.572922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.764 [2024-07-15 16:21:06.572931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.764 qpair failed and we were unable to recover it. 00:29:30.764 [2024-07-15 16:21:06.573423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.764 [2024-07-15 16:21:06.573453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.764 qpair failed and we were unable to recover it. 00:29:30.764 [2024-07-15 16:21:06.573872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.764 [2024-07-15 16:21:06.573882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.764 qpair failed and we were unable to recover it. 00:29:30.764 [2024-07-15 16:21:06.574108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.764 [2024-07-15 16:21:06.574117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.764 qpair failed and we were unable to recover it. 00:29:30.764 [2024-07-15 16:21:06.574515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.764 [2024-07-15 16:21:06.574524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.764 qpair failed and we were unable to recover it. 00:29:30.764 [2024-07-15 16:21:06.574921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.764 [2024-07-15 16:21:06.574929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.764 qpair failed and we were unable to recover it. 00:29:30.764 [2024-07-15 16:21:06.575447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.764 [2024-07-15 16:21:06.575476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.764 qpair failed and we were unable to recover it. 00:29:30.764 [2024-07-15 16:21:06.575883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.764 [2024-07-15 16:21:06.575893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.764 qpair failed and we were unable to recover it. 00:29:30.764 [2024-07-15 16:21:06.576479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.764 [2024-07-15 16:21:06.576508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:30.764 qpair failed and we were unable to recover it. 00:29:31.036 [2024-07-15 16:21:06.576959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.036 [2024-07-15 16:21:06.576970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.036 qpair failed and we were unable to recover it. 
00:29:31.036 [2024-07-15 16:21:06.577483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.036 [2024-07-15 16:21:06.577511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.036 qpair failed and we were unable to recover it. 00:29:31.036 [2024-07-15 16:21:06.577846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.036 [2024-07-15 16:21:06.577856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.036 qpair failed and we were unable to recover it. 00:29:31.036 [2024-07-15 16:21:06.578079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.036 [2024-07-15 16:21:06.578087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.036 qpair failed and we were unable to recover it. 00:29:31.036 [2024-07-15 16:21:06.578323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.036 [2024-07-15 16:21:06.578331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.036 qpair failed and we were unable to recover it. 00:29:31.036 [2024-07-15 16:21:06.578750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.036 [2024-07-15 16:21:06.578757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.036 qpair failed and we were unable to recover it. 00:29:31.036 [2024-07-15 16:21:06.579118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.036 [2024-07-15 16:21:06.579132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.036 qpair failed and we were unable to recover it. 00:29:31.036 [2024-07-15 16:21:06.579412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.036 [2024-07-15 16:21:06.579420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.036 qpair failed and we were unable to recover it. 00:29:31.036 [2024-07-15 16:21:06.579815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.036 [2024-07-15 16:21:06.579823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.036 qpair failed and we were unable to recover it. 00:29:31.036 [2024-07-15 16:21:06.580362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.036 [2024-07-15 16:21:06.580391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.036 qpair failed and we were unable to recover it. 00:29:31.036 [2024-07-15 16:21:06.580794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.036 [2024-07-15 16:21:06.580803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.036 qpair failed and we were unable to recover it. 
00:29:31.036 [2024-07-15 16:21:06.581207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.036 [2024-07-15 16:21:06.581216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.036 qpair failed and we were unable to recover it. 00:29:31.036 [2024-07-15 16:21:06.581634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.036 [2024-07-15 16:21:06.581642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.036 qpair failed and we were unable to recover it. 00:29:31.036 [2024-07-15 16:21:06.582055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.036 [2024-07-15 16:21:06.582063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.036 qpair failed and we were unable to recover it. 00:29:31.036 [2024-07-15 16:21:06.582136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.036 [2024-07-15 16:21:06.582147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.036 qpair failed and we were unable to recover it. 00:29:31.036 [2024-07-15 16:21:06.582530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.036 [2024-07-15 16:21:06.582539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.036 qpair failed and we were unable to recover it. 00:29:31.036 [2024-07-15 16:21:06.582934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.036 [2024-07-15 16:21:06.582943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.036 qpair failed and we were unable to recover it. 00:29:31.036 [2024-07-15 16:21:06.583430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.036 [2024-07-15 16:21:06.583459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.036 qpair failed and we were unable to recover it. 00:29:31.036 [2024-07-15 16:21:06.583714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.036 [2024-07-15 16:21:06.583723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.036 qpair failed and we were unable to recover it. 00:29:31.036 [2024-07-15 16:21:06.584128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.036 [2024-07-15 16:21:06.584136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.036 qpair failed and we were unable to recover it. 00:29:31.036 [2024-07-15 16:21:06.584329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.036 [2024-07-15 16:21:06.584340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.036 qpair failed and we were unable to recover it. 
00:29:31.036 [2024-07-15 16:21:06.584562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.036 [2024-07-15 16:21:06.584570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.036 qpair failed and we were unable to recover it. 00:29:31.036 [2024-07-15 16:21:06.584962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.036 [2024-07-15 16:21:06.584970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.036 qpair failed and we were unable to recover it. 00:29:31.036 [2024-07-15 16:21:06.585393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.036 [2024-07-15 16:21:06.585402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.036 qpair failed and we were unable to recover it. 00:29:31.037 [2024-07-15 16:21:06.585800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.037 [2024-07-15 16:21:06.585808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.037 qpair failed and we were unable to recover it. 00:29:31.037 [2024-07-15 16:21:06.586205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.037 [2024-07-15 16:21:06.586213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.037 qpair failed and we were unable to recover it. 00:29:31.037 [2024-07-15 16:21:06.586436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.037 [2024-07-15 16:21:06.586443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.037 qpair failed and we were unable to recover it. 00:29:31.037 [2024-07-15 16:21:06.586873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.037 [2024-07-15 16:21:06.586881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.037 qpair failed and we were unable to recover it. 00:29:31.037 [2024-07-15 16:21:06.587091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.037 [2024-07-15 16:21:06.587099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.037 qpair failed and we were unable to recover it. 00:29:31.037 [2024-07-15 16:21:06.587463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.037 [2024-07-15 16:21:06.587471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.037 qpair failed and we were unable to recover it. 00:29:31.037 [2024-07-15 16:21:06.587850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.037 [2024-07-15 16:21:06.587858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.037 qpair failed and we were unable to recover it. 
00:29:31.037 [2024-07-15 16:21:06.588272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.037 [2024-07-15 16:21:06.588280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.037 qpair failed and we were unable to recover it. 00:29:31.037 [2024-07-15 16:21:06.588752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.037 [2024-07-15 16:21:06.588764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.037 qpair failed and we were unable to recover it. 00:29:31.037 [2024-07-15 16:21:06.589150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.037 [2024-07-15 16:21:06.589160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.037 qpair failed and we were unable to recover it. 00:29:31.037 [2024-07-15 16:21:06.589566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.037 [2024-07-15 16:21:06.589574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.037 qpair failed and we were unable to recover it. 00:29:31.037 [2024-07-15 16:21:06.589998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.037 [2024-07-15 16:21:06.590006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.037 qpair failed and we were unable to recover it. 00:29:31.037 [2024-07-15 16:21:06.590424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.037 [2024-07-15 16:21:06.590432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.037 qpair failed and we were unable to recover it. 00:29:31.037 [2024-07-15 16:21:06.590830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.037 [2024-07-15 16:21:06.590838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.037 qpair failed and we were unable to recover it. 00:29:31.037 [2024-07-15 16:21:06.591361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.037 [2024-07-15 16:21:06.591390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.037 qpair failed and we were unable to recover it. 00:29:31.037 [2024-07-15 16:21:06.591811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.037 [2024-07-15 16:21:06.591820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.037 qpair failed and we were unable to recover it. 00:29:31.037 [2024-07-15 16:21:06.592223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.037 [2024-07-15 16:21:06.592232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.037 qpair failed and we were unable to recover it. 
00:29:31.037 [2024-07-15 16:21:06.592645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.037 [2024-07-15 16:21:06.592653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.037 qpair failed and we were unable to recover it. 00:29:31.037 [2024-07-15 16:21:06.592924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.037 [2024-07-15 16:21:06.592933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.037 qpair failed and we were unable to recover it. 00:29:31.037 [2024-07-15 16:21:06.593348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.037 [2024-07-15 16:21:06.593358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.037 qpair failed and we were unable to recover it. 00:29:31.037 [2024-07-15 16:21:06.593616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.037 [2024-07-15 16:21:06.593624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.037 qpair failed and we were unable to recover it. 00:29:31.037 [2024-07-15 16:21:06.593845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.037 [2024-07-15 16:21:06.593852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.037 qpair failed and we were unable to recover it. 00:29:31.037 [2024-07-15 16:21:06.594254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.037 [2024-07-15 16:21:06.594262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.037 qpair failed and we were unable to recover it. 00:29:31.037 [2024-07-15 16:21:06.594685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.037 [2024-07-15 16:21:06.594693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.037 qpair failed and we were unable to recover it. 00:29:31.037 [2024-07-15 16:21:06.594900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.037 [2024-07-15 16:21:06.594910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.037 qpair failed and we were unable to recover it. 00:29:31.037 [2024-07-15 16:21:06.595331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.037 [2024-07-15 16:21:06.595340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.037 qpair failed and we were unable to recover it. 00:29:31.037 [2024-07-15 16:21:06.595737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.037 [2024-07-15 16:21:06.595745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.037 qpair failed and we were unable to recover it. 
00:29:31.037 [2024-07-15 16:21:06.596131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.037 [2024-07-15 16:21:06.596140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.037 qpair failed and we were unable to recover it. 00:29:31.037 [2024-07-15 16:21:06.596536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.037 [2024-07-15 16:21:06.596544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.037 qpair failed and we were unable to recover it. 00:29:31.037 [2024-07-15 16:21:06.596800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.037 [2024-07-15 16:21:06.596808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.037 qpair failed and we were unable to recover it. 00:29:31.037 [2024-07-15 16:21:06.597202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.037 [2024-07-15 16:21:06.597210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.037 qpair failed and we were unable to recover it. 00:29:31.037 [2024-07-15 16:21:06.597623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.037 [2024-07-15 16:21:06.597631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.037 qpair failed and we were unable to recover it. 00:29:31.037 [2024-07-15 16:21:06.598020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.037 [2024-07-15 16:21:06.598029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.037 qpair failed and we were unable to recover it. 00:29:31.037 [2024-07-15 16:21:06.598443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.037 [2024-07-15 16:21:06.598451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.037 qpair failed and we were unable to recover it. 00:29:31.037 [2024-07-15 16:21:06.598848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.037 [2024-07-15 16:21:06.598857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.037 qpair failed and we were unable to recover it. 00:29:31.037 [2024-07-15 16:21:06.599319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.037 [2024-07-15 16:21:06.599328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.037 qpair failed and we were unable to recover it. 00:29:31.037 [2024-07-15 16:21:06.599726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.037 [2024-07-15 16:21:06.599736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.037 qpair failed and we were unable to recover it. 
00:29:31.037 [2024-07-15 16:21:06.600135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.037 [2024-07-15 16:21:06.600143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.037 qpair failed and we were unable to recover it. 00:29:31.037 [2024-07-15 16:21:06.600370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.037 [2024-07-15 16:21:06.600378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.037 qpair failed and we were unable to recover it. 00:29:31.037 [2024-07-15 16:21:06.600764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.037 [2024-07-15 16:21:06.600773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.037 qpair failed and we were unable to recover it. 00:29:31.037 [2024-07-15 16:21:06.600836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.037 [2024-07-15 16:21:06.600844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.038 qpair failed and we were unable to recover it. 00:29:31.038 [2024-07-15 16:21:06.601182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.038 [2024-07-15 16:21:06.601191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.038 qpair failed and we were unable to recover it. 00:29:31.038 [2024-07-15 16:21:06.601365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.038 [2024-07-15 16:21:06.601373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.038 qpair failed and we were unable to recover it. 00:29:31.038 [2024-07-15 16:21:06.601742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.038 [2024-07-15 16:21:06.601750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.038 qpair failed and we were unable to recover it. 00:29:31.038 [2024-07-15 16:21:06.602168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.038 [2024-07-15 16:21:06.602176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.038 qpair failed and we were unable to recover it. 00:29:31.038 [2024-07-15 16:21:06.602391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.038 [2024-07-15 16:21:06.602399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.038 qpair failed and we were unable to recover it. 00:29:31.038 [2024-07-15 16:21:06.602807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.038 [2024-07-15 16:21:06.602816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.038 qpair failed and we were unable to recover it. 
00:29:31.038 [2024-07-15 16:21:06.603024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.038 [2024-07-15 16:21:06.603033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.038 qpair failed and we were unable to recover it. 00:29:31.038 [2024-07-15 16:21:06.603403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.038 [2024-07-15 16:21:06.603412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.038 qpair failed and we were unable to recover it. 00:29:31.038 [2024-07-15 16:21:06.603797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.038 [2024-07-15 16:21:06.603806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.038 qpair failed and we were unable to recover it. 00:29:31.038 [2024-07-15 16:21:06.604226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.038 [2024-07-15 16:21:06.604235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.038 qpair failed and we were unable to recover it. 00:29:31.038 [2024-07-15 16:21:06.604687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.038 [2024-07-15 16:21:06.604694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.038 qpair failed and we were unable to recover it. 00:29:31.038 [2024-07-15 16:21:06.605114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.038 [2024-07-15 16:21:06.605134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.038 qpair failed and we were unable to recover it. 00:29:31.038 [2024-07-15 16:21:06.605507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.038 [2024-07-15 16:21:06.605516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.038 qpair failed and we were unable to recover it. 00:29:31.038 [2024-07-15 16:21:06.605914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.038 [2024-07-15 16:21:06.605922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.038 qpair failed and we were unable to recover it. 00:29:31.038 [2024-07-15 16:21:06.606316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.038 [2024-07-15 16:21:06.606324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.038 qpair failed and we were unable to recover it. 00:29:31.038 [2024-07-15 16:21:06.606738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.038 [2024-07-15 16:21:06.606747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.038 qpair failed and we were unable to recover it. 
00:29:31.038 [2024-07-15 16:21:06.607178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.038 [2024-07-15 16:21:06.607187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.038 qpair failed and we were unable to recover it. 00:29:31.038 [2024-07-15 16:21:06.607574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.038 [2024-07-15 16:21:06.607583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.038 qpair failed and we were unable to recover it. 00:29:31.038 [2024-07-15 16:21:06.607979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.038 [2024-07-15 16:21:06.607987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.038 qpair failed and we were unable to recover it. 00:29:31.038 [2024-07-15 16:21:06.608197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.038 [2024-07-15 16:21:06.608205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.038 qpair failed and we were unable to recover it. 00:29:31.038 [2024-07-15 16:21:06.608448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.038 [2024-07-15 16:21:06.608456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.038 qpair failed and we were unable to recover it. 00:29:31.038 [2024-07-15 16:21:06.608852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.038 [2024-07-15 16:21:06.608860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.038 qpair failed and we were unable to recover it. 00:29:31.038 [2024-07-15 16:21:06.609259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.038 [2024-07-15 16:21:06.609269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.038 qpair failed and we were unable to recover it. 00:29:31.038 [2024-07-15 16:21:06.609729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.038 [2024-07-15 16:21:06.609737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.038 qpair failed and we were unable to recover it. 00:29:31.038 [2024-07-15 16:21:06.609805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.038 [2024-07-15 16:21:06.609811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.038 qpair failed and we were unable to recover it. 00:29:31.038 [2024-07-15 16:21:06.610170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.038 [2024-07-15 16:21:06.610179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.038 qpair failed and we were unable to recover it. 
00:29:31.038 [2024-07-15 16:21:06.610572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.038 [2024-07-15 16:21:06.610580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.038 qpair failed and we were unable to recover it. 00:29:31.038 [2024-07-15 16:21:06.610985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.038 [2024-07-15 16:21:06.610993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.038 qpair failed and we were unable to recover it. 00:29:31.038 [2024-07-15 16:21:06.611412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.038 [2024-07-15 16:21:06.611421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.038 qpair failed and we were unable to recover it. 00:29:31.038 [2024-07-15 16:21:06.611804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.038 [2024-07-15 16:21:06.611813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.038 qpair failed and we were unable to recover it. 00:29:31.038 [2024-07-15 16:21:06.612000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.038 [2024-07-15 16:21:06.612009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.038 qpair failed and we were unable to recover it. 00:29:31.038 [2024-07-15 16:21:06.612395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.038 [2024-07-15 16:21:06.612405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.038 qpair failed and we were unable to recover it. 00:29:31.038 [2024-07-15 16:21:06.612820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.038 [2024-07-15 16:21:06.612828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.038 qpair failed and we were unable to recover it. 00:29:31.038 [2024-07-15 16:21:06.613244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.038 [2024-07-15 16:21:06.613252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.038 qpair failed and we were unable to recover it. 00:29:31.038 [2024-07-15 16:21:06.613663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.038 [2024-07-15 16:21:06.613671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.038 qpair failed and we were unable to recover it. 00:29:31.038 [2024-07-15 16:21:06.613892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.038 [2024-07-15 16:21:06.613900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.038 qpair failed and we were unable to recover it. 
00:29:31.038 [2024-07-15 16:21:06.614331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.038 [2024-07-15 16:21:06.614339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.038 qpair failed and we were unable to recover it. 00:29:31.038 [2024-07-15 16:21:06.614762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.038 [2024-07-15 16:21:06.614770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.038 qpair failed and we were unable to recover it. 00:29:31.038 [2024-07-15 16:21:06.615165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.038 [2024-07-15 16:21:06.615173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.038 qpair failed and we were unable to recover it. 00:29:31.038 [2024-07-15 16:21:06.615567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.038 [2024-07-15 16:21:06.615576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.038 qpair failed and we were unable to recover it. 00:29:31.039 [2024-07-15 16:21:06.615832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.039 [2024-07-15 16:21:06.615840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.039 qpair failed and we were unable to recover it. 00:29:31.039 [2024-07-15 16:21:06.616230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.039 [2024-07-15 16:21:06.616238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.039 qpair failed and we were unable to recover it. 00:29:31.039 [2024-07-15 16:21:06.616635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.039 [2024-07-15 16:21:06.616643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.039 qpair failed and we were unable to recover it. 00:29:31.039 [2024-07-15 16:21:06.616969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.039 [2024-07-15 16:21:06.616978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.039 qpair failed and we were unable to recover it. 00:29:31.039 [2024-07-15 16:21:06.617376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.039 [2024-07-15 16:21:06.617384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.039 qpair failed and we were unable to recover it. 00:29:31.039 [2024-07-15 16:21:06.617619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.039 [2024-07-15 16:21:06.617627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.039 qpair failed and we were unable to recover it. 
00:29:31.039 [2024-07-15 16:21:06.618097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.039 [2024-07-15 16:21:06.618105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.039 qpair failed and we were unable to recover it. 00:29:31.039 [2024-07-15 16:21:06.618506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.039 [2024-07-15 16:21:06.618514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.039 qpair failed and we were unable to recover it. 00:29:31.039 [2024-07-15 16:21:06.618799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.039 [2024-07-15 16:21:06.618808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.039 qpair failed and we were unable to recover it. 00:29:31.039 [2024-07-15 16:21:06.619203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.039 [2024-07-15 16:21:06.619211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.039 qpair failed and we were unable to recover it. 00:29:31.039 [2024-07-15 16:21:06.619604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.039 [2024-07-15 16:21:06.619612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.039 qpair failed and we were unable to recover it. 00:29:31.039 [2024-07-15 16:21:06.619791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.039 [2024-07-15 16:21:06.619799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.039 qpair failed and we were unable to recover it. 00:29:31.039 [2024-07-15 16:21:06.620182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.039 [2024-07-15 16:21:06.620190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.039 qpair failed and we were unable to recover it. 00:29:31.039 [2024-07-15 16:21:06.620654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.039 [2024-07-15 16:21:06.620663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.039 qpair failed and we were unable to recover it. 00:29:31.039 [2024-07-15 16:21:06.621051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.039 [2024-07-15 16:21:06.621059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.039 qpair failed and we were unable to recover it. 00:29:31.039 [2024-07-15 16:21:06.621327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.039 [2024-07-15 16:21:06.621335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.039 qpair failed and we were unable to recover it. 
00:29:31.039 [2024-07-15 16:21:06.621749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.039 [2024-07-15 16:21:06.621757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.039 qpair failed and we were unable to recover it. 00:29:31.039 [2024-07-15 16:21:06.621946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.039 [2024-07-15 16:21:06.621953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.039 qpair failed and we were unable to recover it. 00:29:31.039 [2024-07-15 16:21:06.622306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.039 [2024-07-15 16:21:06.622315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.039 qpair failed and we were unable to recover it. 00:29:31.039 [2024-07-15 16:21:06.622709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.039 [2024-07-15 16:21:06.622718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.039 qpair failed and we were unable to recover it. 00:29:31.039 [2024-07-15 16:21:06.623130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.039 [2024-07-15 16:21:06.623139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.039 qpair failed and we were unable to recover it. 00:29:31.039 [2024-07-15 16:21:06.623547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.039 [2024-07-15 16:21:06.623556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.039 qpair failed and we were unable to recover it. 00:29:31.039 [2024-07-15 16:21:06.623961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.039 [2024-07-15 16:21:06.623970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.039 qpair failed and we were unable to recover it. 00:29:31.039 [2024-07-15 16:21:06.624456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.039 [2024-07-15 16:21:06.624485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.039 qpair failed and we were unable to recover it. 00:29:31.039 [2024-07-15 16:21:06.624906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.039 [2024-07-15 16:21:06.624916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.039 qpair failed and we were unable to recover it. 00:29:31.039 [2024-07-15 16:21:06.625315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.039 [2024-07-15 16:21:06.625344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.039 qpair failed and we were unable to recover it. 
00:29:31.039 [2024-07-15 16:21:06.625757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.039 [2024-07-15 16:21:06.625767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.039 qpair failed and we were unable to recover it. 00:29:31.039 [2024-07-15 16:21:06.626196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.039 [2024-07-15 16:21:06.626206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.039 qpair failed and we were unable to recover it. 00:29:31.039 [2024-07-15 16:21:06.626605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.039 [2024-07-15 16:21:06.626613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.039 qpair failed and we were unable to recover it. 00:29:31.039 [2024-07-15 16:21:06.626832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.039 [2024-07-15 16:21:06.626840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.039 qpair failed and we were unable to recover it. 00:29:31.039 [2024-07-15 16:21:06.627029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.039 [2024-07-15 16:21:06.627041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.039 qpair failed and we were unable to recover it. 00:29:31.039 [2024-07-15 16:21:06.627509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.039 [2024-07-15 16:21:06.627518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.039 qpair failed and we were unable to recover it. 00:29:31.039 [2024-07-15 16:21:06.627895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.039 [2024-07-15 16:21:06.627904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.039 qpair failed and we were unable to recover it. 00:29:31.039 [2024-07-15 16:21:06.628297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.039 [2024-07-15 16:21:06.628305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.039 qpair failed and we were unable to recover it. 00:29:31.039 [2024-07-15 16:21:06.628701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.039 [2024-07-15 16:21:06.628709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.039 qpair failed and we were unable to recover it. 00:29:31.039 [2024-07-15 16:21:06.629117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.039 [2024-07-15 16:21:06.629129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.039 qpair failed and we were unable to recover it. 
00:29:31.039 [2024-07-15 16:21:06.629344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.039 [2024-07-15 16:21:06.629351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.039 qpair failed and we were unable to recover it. 00:29:31.039 [2024-07-15 16:21:06.629717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.039 [2024-07-15 16:21:06.629724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.039 qpair failed and we were unable to recover it. 00:29:31.039 [2024-07-15 16:21:06.630133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.039 [2024-07-15 16:21:06.630142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.039 qpair failed and we were unable to recover it. 00:29:31.039 [2024-07-15 16:21:06.630541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.039 [2024-07-15 16:21:06.630550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.039 qpair failed and we were unable to recover it. 00:29:31.039 [2024-07-15 16:21:06.630979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.040 [2024-07-15 16:21:06.630988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.040 qpair failed and we were unable to recover it. 00:29:31.040 [2024-07-15 16:21:06.631359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.040 [2024-07-15 16:21:06.631389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.040 qpair failed and we were unable to recover it. 00:29:31.040 [2024-07-15 16:21:06.631805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.040 [2024-07-15 16:21:06.631814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.040 qpair failed and we were unable to recover it. 00:29:31.040 [2024-07-15 16:21:06.632080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.040 [2024-07-15 16:21:06.632088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.040 qpair failed and we were unable to recover it. 00:29:31.040 [2024-07-15 16:21:06.632494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.040 [2024-07-15 16:21:06.632502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.040 qpair failed and we were unable to recover it. 00:29:31.040 [2024-07-15 16:21:06.632728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.040 [2024-07-15 16:21:06.632735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.040 qpair failed and we were unable to recover it. 
00:29:31.040 [2024-07-15 16:21:06.633106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.040 [2024-07-15 16:21:06.633114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.040 qpair failed and we were unable to recover it. 00:29:31.040 [2024-07-15 16:21:06.633533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.040 [2024-07-15 16:21:06.633542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.040 qpair failed and we were unable to recover it. 00:29:31.040 [2024-07-15 16:21:06.633859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.040 [2024-07-15 16:21:06.633867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.040 qpair failed and we were unable to recover it. 00:29:31.040 [2024-07-15 16:21:06.634389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.040 [2024-07-15 16:21:06.634418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.040 qpair failed and we were unable to recover it. 00:29:31.040 [2024-07-15 16:21:06.634623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.040 [2024-07-15 16:21:06.634632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.040 qpair failed and we were unable to recover it. 00:29:31.040 [2024-07-15 16:21:06.635043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.040 [2024-07-15 16:21:06.635051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.040 qpair failed and we were unable to recover it. 00:29:31.040 [2024-07-15 16:21:06.635448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.040 [2024-07-15 16:21:06.635457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.040 qpair failed and we were unable to recover it. 00:29:31.040 [2024-07-15 16:21:06.635856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.040 [2024-07-15 16:21:06.635864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.040 qpair failed and we were unable to recover it. 00:29:31.040 [2024-07-15 16:21:06.636264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.040 [2024-07-15 16:21:06.636272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.040 qpair failed and we were unable to recover it. 00:29:31.040 [2024-07-15 16:21:06.636672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.040 [2024-07-15 16:21:06.636679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.040 qpair failed and we were unable to recover it. 
00:29:31.040 [2024-07-15 16:21:06.637062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.040 [2024-07-15 16:21:06.637070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.040 qpair failed and we were unable to recover it. 00:29:31.040 [2024-07-15 16:21:06.637290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.040 [2024-07-15 16:21:06.637298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.040 qpair failed and we were unable to recover it. 00:29:31.040 [2024-07-15 16:21:06.637713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.040 [2024-07-15 16:21:06.637721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.040 qpair failed and we were unable to recover it. 00:29:31.040 [2024-07-15 16:21:06.638139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.040 [2024-07-15 16:21:06.638149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.040 qpair failed and we were unable to recover it. 00:29:31.040 [2024-07-15 16:21:06.638544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.040 [2024-07-15 16:21:06.638552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.040 qpair failed and we were unable to recover it. 00:29:31.040 [2024-07-15 16:21:06.638944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.040 [2024-07-15 16:21:06.638952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.040 qpair failed and we were unable to recover it. 00:29:31.040 [2024-07-15 16:21:06.639394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.040 [2024-07-15 16:21:06.639407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.040 qpair failed and we were unable to recover it. 00:29:31.040 [2024-07-15 16:21:06.639616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.040 [2024-07-15 16:21:06.639624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.040 qpair failed and we were unable to recover it. 00:29:31.040 [2024-07-15 16:21:06.640041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.040 [2024-07-15 16:21:06.640049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.040 qpair failed and we were unable to recover it. 00:29:31.040 [2024-07-15 16:21:06.640259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.040 [2024-07-15 16:21:06.640267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.040 qpair failed and we were unable to recover it. 
00:29:31.040 [2024-07-15 16:21:06.640686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.040 [2024-07-15 16:21:06.640694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420
00:29:31.040 qpair failed and we were unable to recover it.
00:29:31.040 [2024-07-15 16:21:06.640900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.040 [2024-07-15 16:21:06.640909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420
00:29:31.040 qpair failed and we were unable to recover it.
00:29:31.040 [... the same three-line sequence repeats for every reconnect attempt logged between 16:21:06.641303 and 16:21:06.713691 (console timestamps 00:29:31.040 through 00:29:31.046): connect() failed, errno = 111 from posix.c:1038:posix_sock_create, the nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." ...]
00:29:31.046 [2024-07-15 16:21:06.714089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.046 [2024-07-15 16:21:06.714097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420
00:29:31.046 qpair failed and we were unable to recover it.
00:29:31.046 [2024-07-15 16:21:06.714504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.046 [2024-07-15 16:21:06.714513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.046 qpair failed and we were unable to recover it. 00:29:31.046 [2024-07-15 16:21:06.714711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.046 [2024-07-15 16:21:06.714720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.046 qpair failed and we were unable to recover it. 00:29:31.046 [2024-07-15 16:21:06.715001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.046 [2024-07-15 16:21:06.715009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.046 qpair failed and we were unable to recover it. 00:29:31.046 [2024-07-15 16:21:06.715320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.046 [2024-07-15 16:21:06.715328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.046 qpair failed and we were unable to recover it. 00:29:31.046 [2024-07-15 16:21:06.715722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.046 [2024-07-15 16:21:06.715730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.046 qpair failed and we were unable to recover it. 00:29:31.046 [2024-07-15 16:21:06.716129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.046 [2024-07-15 16:21:06.716137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.046 qpair failed and we were unable to recover it. 00:29:31.046 [2024-07-15 16:21:06.716511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.046 [2024-07-15 16:21:06.716519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.046 qpair failed and we were unable to recover it. 00:29:31.046 [2024-07-15 16:21:06.716782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.046 [2024-07-15 16:21:06.716789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.046 qpair failed and we were unable to recover it. 00:29:31.046 [2024-07-15 16:21:06.716982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.046 [2024-07-15 16:21:06.716991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.046 qpair failed and we were unable to recover it. 00:29:31.046 [2024-07-15 16:21:06.717394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.046 [2024-07-15 16:21:06.717402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.046 qpair failed and we were unable to recover it. 
00:29:31.046 [2024-07-15 16:21:06.717704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.046 [2024-07-15 16:21:06.717714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.046 qpair failed and we were unable to recover it. 00:29:31.046 [2024-07-15 16:21:06.717783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.046 [2024-07-15 16:21:06.717791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.046 qpair failed and we were unable to recover it. 00:29:31.046 [2024-07-15 16:21:06.718166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.046 [2024-07-15 16:21:06.718174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.046 qpair failed and we were unable to recover it. 00:29:31.046 [2024-07-15 16:21:06.718707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.046 [2024-07-15 16:21:06.718715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.046 qpair failed and we were unable to recover it. 00:29:31.046 [2024-07-15 16:21:06.719087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.046 [2024-07-15 16:21:06.719096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.046 qpair failed and we were unable to recover it. 00:29:31.046 [2024-07-15 16:21:06.719483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.046 [2024-07-15 16:21:06.719491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.046 qpair failed and we were unable to recover it. 00:29:31.046 [2024-07-15 16:21:06.719888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.046 [2024-07-15 16:21:06.719896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.046 qpair failed and we were unable to recover it. 00:29:31.046 [2024-07-15 16:21:06.720279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.046 [2024-07-15 16:21:06.720289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.046 qpair failed and we were unable to recover it. 00:29:31.046 [2024-07-15 16:21:06.720707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.046 [2024-07-15 16:21:06.720716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.046 qpair failed and we were unable to recover it. 00:29:31.046 [2024-07-15 16:21:06.720928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.046 [2024-07-15 16:21:06.720937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.046 qpair failed and we were unable to recover it. 
00:29:31.046 [2024-07-15 16:21:06.721198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.046 [2024-07-15 16:21:06.721205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.046 qpair failed and we were unable to recover it. 00:29:31.046 [2024-07-15 16:21:06.721601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.046 [2024-07-15 16:21:06.721609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.046 qpair failed and we were unable to recover it. 00:29:31.046 [2024-07-15 16:21:06.722028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.046 [2024-07-15 16:21:06.722037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.046 qpair failed and we were unable to recover it. 00:29:31.046 [2024-07-15 16:21:06.722455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.046 [2024-07-15 16:21:06.722464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.046 qpair failed and we were unable to recover it. 00:29:31.046 [2024-07-15 16:21:06.722858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.046 [2024-07-15 16:21:06.722866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.046 qpair failed and we were unable to recover it. 00:29:31.046 [2024-07-15 16:21:06.723259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.046 [2024-07-15 16:21:06.723268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.046 qpair failed and we were unable to recover it. 00:29:31.046 [2024-07-15 16:21:06.723695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.046 [2024-07-15 16:21:06.723706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.046 qpair failed and we were unable to recover it. 00:29:31.046 [2024-07-15 16:21:06.724170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.046 [2024-07-15 16:21:06.724179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.046 qpair failed and we were unable to recover it. 00:29:31.046 [2024-07-15 16:21:06.724421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.046 [2024-07-15 16:21:06.724429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.046 qpair failed and we were unable to recover it. 00:29:31.046 [2024-07-15 16:21:06.724824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.046 [2024-07-15 16:21:06.724833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.046 qpair failed and we were unable to recover it. 
00:29:31.046 [2024-07-15 16:21:06.725249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.046 [2024-07-15 16:21:06.725258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.046 qpair failed and we were unable to recover it. 00:29:31.046 [2024-07-15 16:21:06.725695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.046 [2024-07-15 16:21:06.725703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.046 qpair failed and we were unable to recover it. 00:29:31.046 [2024-07-15 16:21:06.726104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.046 [2024-07-15 16:21:06.726112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.046 qpair failed and we were unable to recover it. 00:29:31.046 [2024-07-15 16:21:06.726503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.046 [2024-07-15 16:21:06.726511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.046 qpair failed and we were unable to recover it. 00:29:31.046 [2024-07-15 16:21:06.726784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.046 [2024-07-15 16:21:06.726792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.047 qpair failed and we were unable to recover it. 00:29:31.047 [2024-07-15 16:21:06.727090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.047 [2024-07-15 16:21:06.727098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.047 qpair failed and we were unable to recover it. 00:29:31.047 [2024-07-15 16:21:06.727296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.047 [2024-07-15 16:21:06.727305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.047 qpair failed and we were unable to recover it. 00:29:31.047 [2024-07-15 16:21:06.727708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.047 [2024-07-15 16:21:06.727717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.047 qpair failed and we were unable to recover it. 00:29:31.047 [2024-07-15 16:21:06.727920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.047 [2024-07-15 16:21:06.727928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.047 qpair failed and we were unable to recover it. 00:29:31.047 [2024-07-15 16:21:06.728330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.047 [2024-07-15 16:21:06.728339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.047 qpair failed and we were unable to recover it. 
00:29:31.047 [2024-07-15 16:21:06.728739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.047 [2024-07-15 16:21:06.728748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.047 qpair failed and we were unable to recover it. 00:29:31.047 [2024-07-15 16:21:06.729148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.047 [2024-07-15 16:21:06.729157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.047 qpair failed and we were unable to recover it. 00:29:31.047 [2024-07-15 16:21:06.729529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.047 [2024-07-15 16:21:06.729537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.047 qpair failed and we were unable to recover it. 00:29:31.047 [2024-07-15 16:21:06.729922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.047 [2024-07-15 16:21:06.729930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.047 qpair failed and we were unable to recover it. 00:29:31.047 [2024-07-15 16:21:06.730153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.047 [2024-07-15 16:21:06.730160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.047 qpair failed and we were unable to recover it. 00:29:31.047 [2024-07-15 16:21:06.730549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.047 [2024-07-15 16:21:06.730557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.047 qpair failed and we were unable to recover it. 00:29:31.047 [2024-07-15 16:21:06.730974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.047 [2024-07-15 16:21:06.730982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.047 qpair failed and we were unable to recover it. 00:29:31.047 [2024-07-15 16:21:06.731208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.047 [2024-07-15 16:21:06.731216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.047 qpair failed and we were unable to recover it. 00:29:31.047 [2024-07-15 16:21:06.731498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.047 [2024-07-15 16:21:06.731506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.047 qpair failed and we were unable to recover it. 00:29:31.047 [2024-07-15 16:21:06.731733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.047 [2024-07-15 16:21:06.731740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.047 qpair failed and we were unable to recover it. 
00:29:31.047 [2024-07-15 16:21:06.732147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.047 [2024-07-15 16:21:06.732155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.047 qpair failed and we were unable to recover it. 00:29:31.047 [2024-07-15 16:21:06.732557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.047 [2024-07-15 16:21:06.732566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.047 qpair failed and we were unable to recover it. 00:29:31.047 [2024-07-15 16:21:06.732959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.047 [2024-07-15 16:21:06.732967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.047 qpair failed and we were unable to recover it. 00:29:31.047 [2024-07-15 16:21:06.733362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.047 [2024-07-15 16:21:06.733371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.047 qpair failed and we were unable to recover it. 00:29:31.047 [2024-07-15 16:21:06.733699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.047 [2024-07-15 16:21:06.733706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.047 qpair failed and we were unable to recover it. 00:29:31.047 [2024-07-15 16:21:06.734101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.047 [2024-07-15 16:21:06.734109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.047 qpair failed and we were unable to recover it. 00:29:31.047 [2024-07-15 16:21:06.734398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.047 [2024-07-15 16:21:06.734406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.047 qpair failed and we were unable to recover it. 00:29:31.047 [2024-07-15 16:21:06.734828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.047 [2024-07-15 16:21:06.734836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.047 qpair failed and we were unable to recover it. 00:29:31.047 [2024-07-15 16:21:06.735141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.047 [2024-07-15 16:21:06.735149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.047 qpair failed and we were unable to recover it. 00:29:31.047 [2024-07-15 16:21:06.735551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.047 [2024-07-15 16:21:06.735559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.047 qpair failed and we were unable to recover it. 
00:29:31.047 [2024-07-15 16:21:06.735954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.047 [2024-07-15 16:21:06.735962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.047 qpair failed and we were unable to recover it. 00:29:31.047 [2024-07-15 16:21:06.736225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.047 [2024-07-15 16:21:06.736233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.047 qpair failed and we were unable to recover it. 00:29:31.047 [2024-07-15 16:21:06.736649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.047 [2024-07-15 16:21:06.736657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.047 qpair failed and we were unable to recover it. 00:29:31.047 [2024-07-15 16:21:06.736862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.047 [2024-07-15 16:21:06.736870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.047 qpair failed and we were unable to recover it. 00:29:31.047 [2024-07-15 16:21:06.737324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.047 [2024-07-15 16:21:06.737332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.047 qpair failed and we were unable to recover it. 00:29:31.047 [2024-07-15 16:21:06.737585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.047 [2024-07-15 16:21:06.737592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.047 qpair failed and we were unable to recover it. 00:29:31.047 [2024-07-15 16:21:06.737977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.047 [2024-07-15 16:21:06.737986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.047 qpair failed and we were unable to recover it. 00:29:31.047 [2024-07-15 16:21:06.738462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.047 [2024-07-15 16:21:06.738471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.047 qpair failed and we were unable to recover it. 00:29:31.047 [2024-07-15 16:21:06.738865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.047 [2024-07-15 16:21:06.738874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.047 qpair failed and we were unable to recover it. 00:29:31.047 [2024-07-15 16:21:06.739308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.047 [2024-07-15 16:21:06.739316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.047 qpair failed and we were unable to recover it. 
00:29:31.047 [2024-07-15 16:21:06.739695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.047 [2024-07-15 16:21:06.739703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.047 qpair failed and we were unable to recover it. 00:29:31.047 [2024-07-15 16:21:06.740039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.047 [2024-07-15 16:21:06.740048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.047 qpair failed and we were unable to recover it. 00:29:31.047 [2024-07-15 16:21:06.740443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.047 [2024-07-15 16:21:06.740451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.047 qpair failed and we were unable to recover it. 00:29:31.047 [2024-07-15 16:21:06.740658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.047 [2024-07-15 16:21:06.740666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.047 qpair failed and we were unable to recover it. 00:29:31.047 [2024-07-15 16:21:06.740827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.047 [2024-07-15 16:21:06.740835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.047 qpair failed and we were unable to recover it. 00:29:31.047 [2024-07-15 16:21:06.741018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.048 [2024-07-15 16:21:06.741027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.048 qpair failed and we were unable to recover it. 00:29:31.048 [2024-07-15 16:21:06.741445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.048 [2024-07-15 16:21:06.741454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.048 qpair failed and we were unable to recover it. 00:29:31.048 [2024-07-15 16:21:06.741654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.048 [2024-07-15 16:21:06.741663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.048 qpair failed and we were unable to recover it. 00:29:31.048 [2024-07-15 16:21:06.741938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.048 [2024-07-15 16:21:06.741946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.048 qpair failed and we were unable to recover it. 00:29:31.048 [2024-07-15 16:21:06.742341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.048 [2024-07-15 16:21:06.742350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.048 qpair failed and we were unable to recover it. 
00:29:31.048 [2024-07-15 16:21:06.742803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.048 [2024-07-15 16:21:06.742812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.048 qpair failed and we were unable to recover it. 00:29:31.048 [2024-07-15 16:21:06.743264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.048 [2024-07-15 16:21:06.743273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.048 qpair failed and we were unable to recover it. 00:29:31.048 [2024-07-15 16:21:06.743697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.048 [2024-07-15 16:21:06.743705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.048 qpair failed and we were unable to recover it. 00:29:31.048 [2024-07-15 16:21:06.744098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.048 [2024-07-15 16:21:06.744107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.048 qpair failed and we were unable to recover it. 00:29:31.048 [2024-07-15 16:21:06.744515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.048 [2024-07-15 16:21:06.744523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.048 qpair failed and we were unable to recover it. 00:29:31.048 [2024-07-15 16:21:06.744960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.048 [2024-07-15 16:21:06.744969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.048 qpair failed and we were unable to recover it. 00:29:31.048 [2024-07-15 16:21:06.745267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.048 [2024-07-15 16:21:06.745275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.048 qpair failed and we were unable to recover it. 00:29:31.048 [2024-07-15 16:21:06.745504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.048 [2024-07-15 16:21:06.745511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.048 qpair failed and we were unable to recover it. 00:29:31.048 [2024-07-15 16:21:06.745912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.048 [2024-07-15 16:21:06.745920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.048 qpair failed and we were unable to recover it. 00:29:31.048 [2024-07-15 16:21:06.746316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.048 [2024-07-15 16:21:06.746324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.048 qpair failed and we were unable to recover it. 
00:29:31.048 [2024-07-15 16:21:06.746547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.048 [2024-07-15 16:21:06.746555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.048 qpair failed and we were unable to recover it. 00:29:31.048 [2024-07-15 16:21:06.746903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.048 [2024-07-15 16:21:06.746912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.048 qpair failed and we were unable to recover it. 00:29:31.048 [2024-07-15 16:21:06.747295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.048 [2024-07-15 16:21:06.747303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.048 qpair failed and we were unable to recover it. 00:29:31.048 [2024-07-15 16:21:06.747526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.048 [2024-07-15 16:21:06.747534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.048 qpair failed and we were unable to recover it. 00:29:31.048 [2024-07-15 16:21:06.747939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.048 [2024-07-15 16:21:06.747946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.048 qpair failed and we were unable to recover it. 00:29:31.048 [2024-07-15 16:21:06.748170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.048 [2024-07-15 16:21:06.748178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.048 qpair failed and we were unable to recover it. 00:29:31.048 [2024-07-15 16:21:06.748567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.048 [2024-07-15 16:21:06.748575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.048 qpair failed and we were unable to recover it. 00:29:31.048 [2024-07-15 16:21:06.748924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.048 [2024-07-15 16:21:06.748932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.048 qpair failed and we were unable to recover it. 00:29:31.048 [2024-07-15 16:21:06.749324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.048 [2024-07-15 16:21:06.749332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.048 qpair failed and we were unable to recover it. 00:29:31.048 [2024-07-15 16:21:06.749631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.048 [2024-07-15 16:21:06.749640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.048 qpair failed and we were unable to recover it. 
00:29:31.048 [2024-07-15 16:21:06.750090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.048 [2024-07-15 16:21:06.750099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.048 qpair failed and we were unable to recover it. 00:29:31.048 [2024-07-15 16:21:06.750373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.048 [2024-07-15 16:21:06.750381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.048 qpair failed and we were unable to recover it. 00:29:31.048 [2024-07-15 16:21:06.750642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.048 [2024-07-15 16:21:06.750650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.048 qpair failed and we were unable to recover it. 00:29:31.048 [2024-07-15 16:21:06.751045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.048 [2024-07-15 16:21:06.751053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.048 qpair failed and we were unable to recover it. 00:29:31.048 [2024-07-15 16:21:06.751450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.048 [2024-07-15 16:21:06.751459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.048 qpair failed and we were unable to recover it. 00:29:31.048 [2024-07-15 16:21:06.751678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.048 [2024-07-15 16:21:06.751687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.048 qpair failed and we were unable to recover it. 00:29:31.048 [2024-07-15 16:21:06.752100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.048 [2024-07-15 16:21:06.752110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.048 qpair failed and we were unable to recover it. 00:29:31.048 [2024-07-15 16:21:06.752582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.048 [2024-07-15 16:21:06.752590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.048 qpair failed and we were unable to recover it. 00:29:31.048 [2024-07-15 16:21:06.752989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.048 [2024-07-15 16:21:06.752997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.048 qpair failed and we were unable to recover it. 00:29:31.048 [2024-07-15 16:21:06.753377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.048 [2024-07-15 16:21:06.753386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.048 qpair failed and we were unable to recover it. 
00:29:31.048 [2024-07-15 16:21:06.753765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.048 [2024-07-15 16:21:06.753774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.048 qpair failed and we were unable to recover it. 00:29:31.048 [2024-07-15 16:21:06.754031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.048 [2024-07-15 16:21:06.754039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.048 qpair failed and we were unable to recover it. 00:29:31.048 [2024-07-15 16:21:06.754452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.048 [2024-07-15 16:21:06.754461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.048 qpair failed and we were unable to recover it. 00:29:31.048 [2024-07-15 16:21:06.754772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.048 [2024-07-15 16:21:06.754781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.048 qpair failed and we were unable to recover it. 00:29:31.048 [2024-07-15 16:21:06.755164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.048 [2024-07-15 16:21:06.755173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.048 qpair failed and we were unable to recover it. 00:29:31.048 [2024-07-15 16:21:06.755462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.048 [2024-07-15 16:21:06.755471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.048 qpair failed and we were unable to recover it. 00:29:31.049 [2024-07-15 16:21:06.755775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.049 [2024-07-15 16:21:06.755784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.049 qpair failed and we were unable to recover it. 00:29:31.049 [2024-07-15 16:21:06.756160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.049 [2024-07-15 16:21:06.756168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.049 qpair failed and we were unable to recover it. 00:29:31.049 [2024-07-15 16:21:06.756570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.049 [2024-07-15 16:21:06.756578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.049 qpair failed and we were unable to recover it. 00:29:31.049 [2024-07-15 16:21:06.756978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.049 [2024-07-15 16:21:06.756987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.049 qpair failed and we were unable to recover it. 
00:29:31.049 [2024-07-15 16:21:06.757048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.049 [2024-07-15 16:21:06.757056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.049 qpair failed and we were unable to recover it. 00:29:31.049 [2024-07-15 16:21:06.757236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.049 [2024-07-15 16:21:06.757245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.049 qpair failed and we were unable to recover it. 00:29:31.049 [2024-07-15 16:21:06.757605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.049 [2024-07-15 16:21:06.757614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.049 qpair failed and we were unable to recover it. 00:29:31.049 [2024-07-15 16:21:06.758037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.049 [2024-07-15 16:21:06.758045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.049 qpair failed and we were unable to recover it. 00:29:31.049 [2024-07-15 16:21:06.758437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.049 [2024-07-15 16:21:06.758446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.049 qpair failed and we were unable to recover it. 00:29:31.049 [2024-07-15 16:21:06.758850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.049 [2024-07-15 16:21:06.758858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.049 qpair failed and we were unable to recover it. 00:29:31.049 [2024-07-15 16:21:06.759057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.049 [2024-07-15 16:21:06.759065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.049 qpair failed and we were unable to recover it. 00:29:31.049 [2024-07-15 16:21:06.759426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.049 [2024-07-15 16:21:06.759434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.049 qpair failed and we were unable to recover it. 00:29:31.049 [2024-07-15 16:21:06.759627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.049 [2024-07-15 16:21:06.759635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.049 qpair failed and we were unable to recover it. 00:29:31.049 [2024-07-15 16:21:06.760039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.049 [2024-07-15 16:21:06.760048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.049 qpair failed and we were unable to recover it. 
00:29:31.049 [2024-07-15 16:21:06.760433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.049 [2024-07-15 16:21:06.760441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.049 qpair failed and we were unable to recover it. 00:29:31.049 [2024-07-15 16:21:06.760648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.049 [2024-07-15 16:21:06.760655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.049 qpair failed and we were unable to recover it. 00:29:31.049 [2024-07-15 16:21:06.761052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.049 [2024-07-15 16:21:06.761060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.049 qpair failed and we were unable to recover it. 00:29:31.049 [2024-07-15 16:21:06.761446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.049 [2024-07-15 16:21:06.761456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.049 qpair failed and we were unable to recover it. 00:29:31.049 [2024-07-15 16:21:06.761856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.049 [2024-07-15 16:21:06.761864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.049 qpair failed and we were unable to recover it. 00:29:31.049 [2024-07-15 16:21:06.762278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.049 [2024-07-15 16:21:06.762286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.049 qpair failed and we were unable to recover it. 00:29:31.049 [2024-07-15 16:21:06.762374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.049 [2024-07-15 16:21:06.762380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.049 qpair failed and we were unable to recover it. 00:29:31.049 [2024-07-15 16:21:06.762733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.049 [2024-07-15 16:21:06.762742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.049 qpair failed and we were unable to recover it. 00:29:31.049 [2024-07-15 16:21:06.763011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.049 [2024-07-15 16:21:06.763019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.049 qpair failed and we were unable to recover it. 00:29:31.049 [2024-07-15 16:21:06.763399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.049 [2024-07-15 16:21:06.763407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.049 qpair failed and we were unable to recover it. 
00:29:31.049 [2024-07-15 16:21:06.763689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.049 [2024-07-15 16:21:06.763697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.049 qpair failed and we were unable to recover it. 00:29:31.049 [2024-07-15 16:21:06.764129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.049 [2024-07-15 16:21:06.764137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.049 qpair failed and we were unable to recover it. 00:29:31.049 [2024-07-15 16:21:06.764513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.049 [2024-07-15 16:21:06.764521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.049 qpair failed and we were unable to recover it. 00:29:31.049 [2024-07-15 16:21:06.764914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.049 [2024-07-15 16:21:06.764923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.049 qpair failed and we were unable to recover it. 00:29:31.049 [2024-07-15 16:21:06.765321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.049 [2024-07-15 16:21:06.765329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.049 qpair failed and we were unable to recover it. 00:29:31.049 [2024-07-15 16:21:06.765661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.049 [2024-07-15 16:21:06.765670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.049 qpair failed and we were unable to recover it. 00:29:31.049 [2024-07-15 16:21:06.766070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.049 [2024-07-15 16:21:06.766080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.049 qpair failed and we were unable to recover it. 00:29:31.049 [2024-07-15 16:21:06.766467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.049 [2024-07-15 16:21:06.766475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.049 qpair failed and we were unable to recover it. 00:29:31.049 [2024-07-15 16:21:06.766872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.049 [2024-07-15 16:21:06.766881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.049 qpair failed and we were unable to recover it. 00:29:31.049 [2024-07-15 16:21:06.767276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.049 [2024-07-15 16:21:06.767284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.049 qpair failed and we were unable to recover it. 
00:29:31.049 [2024-07-15 16:21:06.767499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.049 [2024-07-15 16:21:06.767507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.049 qpair failed and we were unable to recover it. 00:29:31.049 [2024-07-15 16:21:06.767744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.050 [2024-07-15 16:21:06.767752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.050 qpair failed and we were unable to recover it. 00:29:31.050 [2024-07-15 16:21:06.767952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.050 [2024-07-15 16:21:06.767959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.050 qpair failed and we were unable to recover it. 00:29:31.050 [2024-07-15 16:21:06.768152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.050 [2024-07-15 16:21:06.768161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.050 qpair failed and we were unable to recover it. 00:29:31.050 [2024-07-15 16:21:06.768301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.050 [2024-07-15 16:21:06.768309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.050 qpair failed and we were unable to recover it. 00:29:31.050 [2024-07-15 16:21:06.768568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.050 [2024-07-15 16:21:06.768576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.050 qpair failed and we were unable to recover it. 00:29:31.050 [2024-07-15 16:21:06.769005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.050 [2024-07-15 16:21:06.769014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.050 qpair failed and we were unable to recover it. 00:29:31.050 [2024-07-15 16:21:06.769429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.050 [2024-07-15 16:21:06.769438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.050 qpair failed and we were unable to recover it. 00:29:31.050 [2024-07-15 16:21:06.769836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.050 [2024-07-15 16:21:06.769845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.050 qpair failed and we were unable to recover it. 00:29:31.050 [2024-07-15 16:21:06.770049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.050 [2024-07-15 16:21:06.770059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.050 qpair failed and we were unable to recover it. 
00:29:31.050 [2024-07-15 16:21:06.770452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.050 [2024-07-15 16:21:06.770461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.050 qpair failed and we were unable to recover it. 00:29:31.050 [2024-07-15 16:21:06.770850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.050 [2024-07-15 16:21:06.770859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.050 qpair failed and we were unable to recover it. 00:29:31.050 [2024-07-15 16:21:06.771253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.050 [2024-07-15 16:21:06.771262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.050 qpair failed and we were unable to recover it. 00:29:31.050 [2024-07-15 16:21:06.771664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.050 [2024-07-15 16:21:06.771673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.050 qpair failed and we were unable to recover it. 00:29:31.050 [2024-07-15 16:21:06.772088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.050 [2024-07-15 16:21:06.772097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.050 qpair failed and we were unable to recover it. 00:29:31.050 [2024-07-15 16:21:06.772320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.050 [2024-07-15 16:21:06.772329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.050 qpair failed and we were unable to recover it. 00:29:31.050 [2024-07-15 16:21:06.772637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.050 [2024-07-15 16:21:06.772646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.050 qpair failed and we were unable to recover it. 00:29:31.050 [2024-07-15 16:21:06.772728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.050 [2024-07-15 16:21:06.772736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.050 qpair failed and we were unable to recover it. 00:29:31.050 [2024-07-15 16:21:06.773087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.050 [2024-07-15 16:21:06.773096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.050 qpair failed and we were unable to recover it. 00:29:31.050 [2024-07-15 16:21:06.773512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.050 [2024-07-15 16:21:06.773520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.050 qpair failed and we were unable to recover it. 
00:29:31.050 [2024-07-15 16:21:06.773913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.050 [2024-07-15 16:21:06.773921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.050 qpair failed and we were unable to recover it. 00:29:31.050 [2024-07-15 16:21:06.774141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.050 [2024-07-15 16:21:06.774150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.050 qpair failed and we were unable to recover it. 00:29:31.050 [2024-07-15 16:21:06.774558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.050 [2024-07-15 16:21:06.774566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.050 qpair failed and we were unable to recover it. 00:29:31.050 [2024-07-15 16:21:06.774982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.050 [2024-07-15 16:21:06.774990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.050 qpair failed and we were unable to recover it. 00:29:31.050 [2024-07-15 16:21:06.775373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.050 [2024-07-15 16:21:06.775381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.050 qpair failed and we were unable to recover it. 00:29:31.050 [2024-07-15 16:21:06.775774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.050 [2024-07-15 16:21:06.775782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.050 qpair failed and we were unable to recover it. 00:29:31.050 [2024-07-15 16:21:06.775986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.050 [2024-07-15 16:21:06.775994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.050 qpair failed and we were unable to recover it. 00:29:31.050 [2024-07-15 16:21:06.776319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.050 [2024-07-15 16:21:06.776328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.050 qpair failed and we were unable to recover it. 00:29:31.050 [2024-07-15 16:21:06.776722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.050 [2024-07-15 16:21:06.776729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.050 qpair failed and we were unable to recover it. 00:29:31.050 [2024-07-15 16:21:06.777172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.050 [2024-07-15 16:21:06.777181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.050 qpair failed and we were unable to recover it. 
00:29:31.050 [2024-07-15 16:21:06.777579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.050 [2024-07-15 16:21:06.777588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.050 qpair failed and we were unable to recover it. 00:29:31.050 [2024-07-15 16:21:06.778010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.050 [2024-07-15 16:21:06.778019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.050 qpair failed and we were unable to recover it. 00:29:31.050 [2024-07-15 16:21:06.778430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.050 [2024-07-15 16:21:06.778438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.050 qpair failed and we were unable to recover it. 00:29:31.050 [2024-07-15 16:21:06.778652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.050 [2024-07-15 16:21:06.778660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.050 qpair failed and we were unable to recover it. 00:29:31.050 [2024-07-15 16:21:06.779069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.050 [2024-07-15 16:21:06.779077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.050 qpair failed and we were unable to recover it. 00:29:31.050 [2024-07-15 16:21:06.779284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.050 [2024-07-15 16:21:06.779292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.050 qpair failed and we were unable to recover it. 00:29:31.050 [2024-07-15 16:21:06.779597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.050 [2024-07-15 16:21:06.779608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.050 qpair failed and we were unable to recover it. 00:29:31.050 [2024-07-15 16:21:06.779988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.050 [2024-07-15 16:21:06.779995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.050 qpair failed and we were unable to recover it. 00:29:31.050 [2024-07-15 16:21:06.780258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.050 [2024-07-15 16:21:06.780266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.050 qpair failed and we were unable to recover it. 00:29:31.050 [2024-07-15 16:21:06.780677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.050 [2024-07-15 16:21:06.780685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.050 qpair failed and we were unable to recover it. 
00:29:31.050 [2024-07-15 16:21:06.781077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.050 [2024-07-15 16:21:06.781085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.050 qpair failed and we were unable to recover it. 00:29:31.050 [2024-07-15 16:21:06.781474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.050 [2024-07-15 16:21:06.781482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.051 qpair failed and we were unable to recover it. 00:29:31.051 [2024-07-15 16:21:06.781874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.051 [2024-07-15 16:21:06.781882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.051 qpair failed and we were unable to recover it. 00:29:31.051 [2024-07-15 16:21:06.782296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.051 [2024-07-15 16:21:06.782304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.051 qpair failed and we were unable to recover it. 00:29:31.051 [2024-07-15 16:21:06.782698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.051 [2024-07-15 16:21:06.782707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.051 qpair failed and we were unable to recover it. 00:29:31.051 [2024-07-15 16:21:06.783100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.051 [2024-07-15 16:21:06.783109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.051 qpair failed and we were unable to recover it. 00:29:31.051 [2024-07-15 16:21:06.783498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.051 [2024-07-15 16:21:06.783506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.051 qpair failed and we were unable to recover it. 00:29:31.051 [2024-07-15 16:21:06.783929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.051 [2024-07-15 16:21:06.783937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.051 qpair failed and we were unable to recover it. 00:29:31.051 [2024-07-15 16:21:06.784430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.051 [2024-07-15 16:21:06.784459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.051 qpair failed and we were unable to recover it. 00:29:31.051 [2024-07-15 16:21:06.784865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.051 [2024-07-15 16:21:06.784876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.051 qpair failed and we were unable to recover it. 
00:29:31.051 [2024-07-15 16:21:06.785384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.051 [2024-07-15 16:21:06.785413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.051 qpair failed and we were unable to recover it. 00:29:31.051 [2024-07-15 16:21:06.785834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.051 [2024-07-15 16:21:06.785843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.051 qpair failed and we were unable to recover it. 00:29:31.051 [2024-07-15 16:21:06.786051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.051 [2024-07-15 16:21:06.786059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.051 qpair failed and we were unable to recover it. 00:29:31.051 [2024-07-15 16:21:06.786293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.051 [2024-07-15 16:21:06.786302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.051 qpair failed and we were unable to recover it. 00:29:31.051 [2024-07-15 16:21:06.786526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.051 [2024-07-15 16:21:06.786535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.051 qpair failed and we were unable to recover it. 00:29:31.051 [2024-07-15 16:21:06.786937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.051 [2024-07-15 16:21:06.786945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.051 qpair failed and we were unable to recover it. 00:29:31.051 [2024-07-15 16:21:06.787343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.051 [2024-07-15 16:21:06.787351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.051 qpair failed and we were unable to recover it. 00:29:31.051 [2024-07-15 16:21:06.787689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.051 [2024-07-15 16:21:06.787698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.051 qpair failed and we were unable to recover it. 00:29:31.051 [2024-07-15 16:21:06.788108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.051 [2024-07-15 16:21:06.788115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.051 qpair failed and we were unable to recover it. 00:29:31.051 [2024-07-15 16:21:06.788514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.051 [2024-07-15 16:21:06.788522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.051 qpair failed and we were unable to recover it. 
00:29:31.051 [2024-07-15 16:21:06.788918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.051 [2024-07-15 16:21:06.788926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.051 qpair failed and we were unable to recover it. 00:29:31.051 [2024-07-15 16:21:06.789148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.051 [2024-07-15 16:21:06.789156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.051 qpair failed and we were unable to recover it. 00:29:31.051 [2024-07-15 16:21:06.789544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.051 [2024-07-15 16:21:06.789552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.051 qpair failed and we were unable to recover it. 00:29:31.051 [2024-07-15 16:21:06.789973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.051 [2024-07-15 16:21:06.789985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.051 qpair failed and we were unable to recover it. 00:29:31.051 [2024-07-15 16:21:06.790373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.051 [2024-07-15 16:21:06.790382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.051 qpair failed and we were unable to recover it. 00:29:31.051 [2024-07-15 16:21:06.790588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.051 [2024-07-15 16:21:06.790598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.051 qpair failed and we were unable to recover it. 00:29:31.051 [2024-07-15 16:21:06.790818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.051 [2024-07-15 16:21:06.790826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.051 qpair failed and we were unable to recover it. 00:29:31.051 [2024-07-15 16:21:06.791230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.051 [2024-07-15 16:21:06.791238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.051 qpair failed and we were unable to recover it. 00:29:31.051 [2024-07-15 16:21:06.791631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.051 [2024-07-15 16:21:06.791638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.051 qpair failed and we were unable to recover it. 00:29:31.051 [2024-07-15 16:21:06.792032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.051 [2024-07-15 16:21:06.792040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.051 qpair failed and we were unable to recover it. 
00:29:31.051 [2024-07-15 16:21:06.792400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.051 [2024-07-15 16:21:06.792408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.051 qpair failed and we were unable to recover it. 00:29:31.051 [2024-07-15 16:21:06.792629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.051 [2024-07-15 16:21:06.792637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.051 qpair failed and we were unable to recover it. 00:29:31.051 [2024-07-15 16:21:06.792940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.051 [2024-07-15 16:21:06.792948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.051 qpair failed and we were unable to recover it. 00:29:31.051 [2024-07-15 16:21:06.793398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.051 [2024-07-15 16:21:06.793407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.051 qpair failed and we were unable to recover it. 00:29:31.051 [2024-07-15 16:21:06.793793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.051 [2024-07-15 16:21:06.793802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.051 qpair failed and we were unable to recover it. 00:29:31.051 [2024-07-15 16:21:06.794216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.051 [2024-07-15 16:21:06.794225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.051 qpair failed and we were unable to recover it. 00:29:31.051 [2024-07-15 16:21:06.794633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.051 [2024-07-15 16:21:06.794642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.051 qpair failed and we were unable to recover it. 00:29:31.051 [2024-07-15 16:21:06.795041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.051 [2024-07-15 16:21:06.795049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.051 qpair failed and we were unable to recover it. 00:29:31.051 [2024-07-15 16:21:06.795445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.051 [2024-07-15 16:21:06.795453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.051 qpair failed and we were unable to recover it. 00:29:31.051 [2024-07-15 16:21:06.795681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.051 [2024-07-15 16:21:06.795689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.051 qpair failed and we were unable to recover it. 
00:29:31.051 [2024-07-15 16:21:06.796087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.051 [2024-07-15 16:21:06.796096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.051 qpair failed and we were unable to recover it. 00:29:31.051 [2024-07-15 16:21:06.796506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.051 [2024-07-15 16:21:06.796514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.051 qpair failed and we were unable to recover it. 00:29:31.052 [2024-07-15 16:21:06.796906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 16:21:06.796914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.052 qpair failed and we were unable to recover it. 00:29:31.052 [2024-07-15 16:21:06.797331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 16:21:06.797339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.052 qpair failed and we were unable to recover it. 00:29:31.052 [2024-07-15 16:21:06.797737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 16:21:06.797745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.052 qpair failed and we were unable to recover it. 00:29:31.052 [2024-07-15 16:21:06.798145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 16:21:06.798153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.052 qpair failed and we were unable to recover it. 00:29:31.052 [2024-07-15 16:21:06.798553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 16:21:06.798562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.052 qpair failed and we were unable to recover it. 00:29:31.052 [2024-07-15 16:21:06.798954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 16:21:06.798962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.052 qpair failed and we were unable to recover it. 00:29:31.052 [2024-07-15 16:21:06.799360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 16:21:06.799369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.052 qpair failed and we were unable to recover it. 00:29:31.052 [2024-07-15 16:21:06.799760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 16:21:06.799768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.052 qpair failed and we were unable to recover it. 
00:29:31.052 [2024-07-15 16:21:06.800173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 16:21:06.800181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.052 qpair failed and we were unable to recover it. 00:29:31.052 [2024-07-15 16:21:06.800400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 16:21:06.800408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.052 qpair failed and we were unable to recover it. 00:29:31.052 [2024-07-15 16:21:06.800812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 16:21:06.800819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.052 qpair failed and we were unable to recover it. 00:29:31.052 [2024-07-15 16:21:06.801213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 16:21:06.801222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.052 qpair failed and we were unable to recover it. 00:29:31.052 [2024-07-15 16:21:06.801609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 16:21:06.801617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.052 qpair failed and we were unable to recover it. 00:29:31.052 [2024-07-15 16:21:06.801823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 16:21:06.801830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.052 qpair failed and we were unable to recover it. 00:29:31.052 [2024-07-15 16:21:06.802198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 16:21:06.802206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.052 qpair failed and we were unable to recover it. 00:29:31.052 [2024-07-15 16:21:06.802620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 16:21:06.802628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.052 qpair failed and we were unable to recover it. 00:29:31.052 [2024-07-15 16:21:06.802827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 16:21:06.802835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.052 qpair failed and we were unable to recover it. 00:29:31.052 [2024-07-15 16:21:06.803016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 16:21:06.803025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.052 qpair failed and we were unable to recover it. 
00:29:31.052 [2024-07-15 16:21:06.803416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 16:21:06.803424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.052 qpair failed and we were unable to recover it. 00:29:31.052 [2024-07-15 16:21:06.803823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 16:21:06.803831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.052 qpair failed and we were unable to recover it. 00:29:31.052 [2024-07-15 16:21:06.804247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 16:21:06.804255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.052 qpair failed and we were unable to recover it. 00:29:31.052 [2024-07-15 16:21:06.804514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 16:21:06.804524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.052 qpair failed and we were unable to recover it. 00:29:31.052 [2024-07-15 16:21:06.804917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 16:21:06.804925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.052 qpair failed and we were unable to recover it. 00:29:31.052 [2024-07-15 16:21:06.805134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 16:21:06.805141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.052 qpair failed and we were unable to recover it. 00:29:31.052 [2024-07-15 16:21:06.805541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 16:21:06.805549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.052 qpair failed and we were unable to recover it. 00:29:31.052 [2024-07-15 16:21:06.805811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 16:21:06.805820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.052 qpair failed and we were unable to recover it. 00:29:31.052 [2024-07-15 16:21:06.806217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 16:21:06.806225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.052 qpair failed and we were unable to recover it. 00:29:31.052 [2024-07-15 16:21:06.806631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 16:21:06.806639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.052 qpair failed and we were unable to recover it. 
00:29:31.052 [2024-07-15 16:21:06.806859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 16:21:06.806867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.052 qpair failed and we were unable to recover it. 00:29:31.052 [2024-07-15 16:21:06.807270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 16:21:06.807283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.052 qpair failed and we were unable to recover it. 00:29:31.052 [2024-07-15 16:21:06.807677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 16:21:06.807685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.052 qpair failed and we were unable to recover it. 00:29:31.052 [2024-07-15 16:21:06.808080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 16:21:06.808088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.052 qpair failed and we were unable to recover it. 00:29:31.052 [2024-07-15 16:21:06.808475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 16:21:06.808483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.052 qpair failed and we were unable to recover it. 00:29:31.052 [2024-07-15 16:21:06.808911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 16:21:06.808919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.052 qpair failed and we were unable to recover it. 00:29:31.052 [2024-07-15 16:21:06.809330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 16:21:06.809339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.052 qpair failed and we were unable to recover it. 00:29:31.052 [2024-07-15 16:21:06.809735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 16:21:06.809744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.052 qpair failed and we were unable to recover it. 00:29:31.052 [2024-07-15 16:21:06.810138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 16:21:06.810147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.052 qpair failed and we were unable to recover it. 00:29:31.052 [2024-07-15 16:21:06.810562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 16:21:06.810569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.052 qpair failed and we were unable to recover it. 
00:29:31.052 [2024-07-15 16:21:06.810891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 16:21:06.810900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.052 qpair failed and we were unable to recover it. 00:29:31.052 [2024-07-15 16:21:06.811308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 16:21:06.811316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.052 qpair failed and we were unable to recover it. 00:29:31.052 [2024-07-15 16:21:06.811713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.053 [2024-07-15 16:21:06.811721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.053 qpair failed and we were unable to recover it. 00:29:31.053 [2024-07-15 16:21:06.812174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.053 [2024-07-15 16:21:06.812182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.053 qpair failed and we were unable to recover it. 00:29:31.053 [2024-07-15 16:21:06.812616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.053 [2024-07-15 16:21:06.812625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.053 qpair failed and we were unable to recover it. 00:29:31.053 [2024-07-15 16:21:06.812846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.053 [2024-07-15 16:21:06.812854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.053 qpair failed and we were unable to recover it. 00:29:31.053 [2024-07-15 16:21:06.813047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.053 [2024-07-15 16:21:06.813056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.053 qpair failed and we were unable to recover it. 00:29:31.053 [2024-07-15 16:21:06.813436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.053 [2024-07-15 16:21:06.813444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.053 qpair failed and we were unable to recover it. 00:29:31.053 [2024-07-15 16:21:06.813716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.053 [2024-07-15 16:21:06.813723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.053 qpair failed and we were unable to recover it. 00:29:31.053 [2024-07-15 16:21:06.814010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.053 [2024-07-15 16:21:06.814018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.053 qpair failed and we were unable to recover it. 
00:29:31.053 [2024-07-15 16:21:06.814437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.053 [2024-07-15 16:21:06.814446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.053 qpair failed and we were unable to recover it. 00:29:31.053 [2024-07-15 16:21:06.814765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.053 [2024-07-15 16:21:06.814773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.053 qpair failed and we were unable to recover it. 00:29:31.053 [2024-07-15 16:21:06.814836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.053 [2024-07-15 16:21:06.814843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.053 qpair failed and we were unable to recover it. 00:29:31.053 [2024-07-15 16:21:06.815200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.053 [2024-07-15 16:21:06.815208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.053 qpair failed and we were unable to recover it. 00:29:31.053 [2024-07-15 16:21:06.815433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.053 [2024-07-15 16:21:06.815441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.053 qpair failed and we were unable to recover it. 00:29:31.053 [2024-07-15 16:21:06.815734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.053 [2024-07-15 16:21:06.815742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.053 qpair failed and we were unable to recover it. 00:29:31.053 [2024-07-15 16:21:06.815998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.053 [2024-07-15 16:21:06.816006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.053 qpair failed and we were unable to recover it. 00:29:31.053 [2024-07-15 16:21:06.816496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.053 [2024-07-15 16:21:06.816505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.053 qpair failed and we were unable to recover it. 00:29:31.053 [2024-07-15 16:21:06.816900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.053 [2024-07-15 16:21:06.816908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.053 qpair failed and we were unable to recover it. 00:29:31.053 [2024-07-15 16:21:06.817105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.053 [2024-07-15 16:21:06.817113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.053 qpair failed and we were unable to recover it. 
00:29:31.053 [2024-07-15 16:21:06.817545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.053 [2024-07-15 16:21:06.817555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.053 qpair failed and we were unable to recover it. 00:29:31.053 [2024-07-15 16:21:06.817947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.053 [2024-07-15 16:21:06.817956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.053 qpair failed and we were unable to recover it. 00:29:31.053 [2024-07-15 16:21:06.818369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.053 [2024-07-15 16:21:06.818378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.053 qpair failed and we were unable to recover it. 00:29:31.053 [2024-07-15 16:21:06.818664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.053 [2024-07-15 16:21:06.818674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.053 qpair failed and we were unable to recover it. 00:29:31.053 [2024-07-15 16:21:06.819088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.053 [2024-07-15 16:21:06.819096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.053 qpair failed and we were unable to recover it. 00:29:31.053 [2024-07-15 16:21:06.819483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.053 [2024-07-15 16:21:06.819492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.053 qpair failed and we were unable to recover it. 00:29:31.053 [2024-07-15 16:21:06.819887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.053 [2024-07-15 16:21:06.819895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.053 qpair failed and we were unable to recover it. 00:29:31.053 [2024-07-15 16:21:06.820377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.053 [2024-07-15 16:21:06.820406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.053 qpair failed and we were unable to recover it. 00:29:31.053 [2024-07-15 16:21:06.820828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.053 [2024-07-15 16:21:06.820838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.053 qpair failed and we were unable to recover it. 00:29:31.053 [2024-07-15 16:21:06.821239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.053 [2024-07-15 16:21:06.821247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.053 qpair failed and we were unable to recover it. 
00:29:31.053 [2024-07-15 16:21:06.821642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.053 [2024-07-15 16:21:06.821650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.053 qpair failed and we were unable to recover it. 00:29:31.053 [2024-07-15 16:21:06.822128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.053 [2024-07-15 16:21:06.822138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.053 qpair failed and we were unable to recover it. 00:29:31.053 [2024-07-15 16:21:06.822514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.053 [2024-07-15 16:21:06.822523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.053 qpair failed and we were unable to recover it. 00:29:31.053 [2024-07-15 16:21:06.822918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.053 [2024-07-15 16:21:06.822927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.053 qpair failed and we were unable to recover it. 00:29:31.053 [2024-07-15 16:21:06.823421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.053 [2024-07-15 16:21:06.823450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.053 qpair failed and we were unable to recover it. 00:29:31.053 [2024-07-15 16:21:06.823853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.053 [2024-07-15 16:21:06.823863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.053 qpair failed and we were unable to recover it. 00:29:31.053 [2024-07-15 16:21:06.824376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.053 [2024-07-15 16:21:06.824405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.053 qpair failed and we were unable to recover it. 00:29:31.053 [2024-07-15 16:21:06.824895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.054 [2024-07-15 16:21:06.824905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.054 qpair failed and we were unable to recover it. 00:29:31.054 [2024-07-15 16:21:06.825411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.054 [2024-07-15 16:21:06.825440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.054 qpair failed and we were unable to recover it. 00:29:31.054 [2024-07-15 16:21:06.825843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.054 [2024-07-15 16:21:06.825852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.054 qpair failed and we were unable to recover it. 
00:29:31.054 [2024-07-15 16:21:06.826324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.054 [2024-07-15 16:21:06.826353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.054 qpair failed and we were unable to recover it. 00:29:31.054 [2024-07-15 16:21:06.826763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.054 [2024-07-15 16:21:06.826773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.054 qpair failed and we were unable to recover it. 00:29:31.054 [2024-07-15 16:21:06.827170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.054 [2024-07-15 16:21:06.827180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.054 qpair failed and we were unable to recover it. 00:29:31.054 [2024-07-15 16:21:06.827589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.054 [2024-07-15 16:21:06.827597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.054 qpair failed and we were unable to recover it. 00:29:31.054 [2024-07-15 16:21:06.828017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.054 [2024-07-15 16:21:06.828025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.054 qpair failed and we were unable to recover it. 00:29:31.054 [2024-07-15 16:21:06.828245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.054 [2024-07-15 16:21:06.828253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.054 qpair failed and we were unable to recover it. 00:29:31.054 [2024-07-15 16:21:06.828559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.054 [2024-07-15 16:21:06.828567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.054 qpair failed and we were unable to recover it. 00:29:31.054 [2024-07-15 16:21:06.828964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.054 [2024-07-15 16:21:06.828972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.054 qpair failed and we were unable to recover it. 00:29:31.054 [2024-07-15 16:21:06.829229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.054 [2024-07-15 16:21:06.829237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.054 qpair failed and we were unable to recover it. 00:29:31.054 [2024-07-15 16:21:06.829550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.054 [2024-07-15 16:21:06.829558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.054 qpair failed and we were unable to recover it. 
00:29:31.054 [2024-07-15 16:21:06.829787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.054 [2024-07-15 16:21:06.829795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.054 qpair failed and we were unable to recover it. 00:29:31.054 [2024-07-15 16:21:06.830151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.054 [2024-07-15 16:21:06.830159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.054 qpair failed and we were unable to recover it. 00:29:31.054 [2024-07-15 16:21:06.830589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.054 [2024-07-15 16:21:06.830597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.054 qpair failed and we were unable to recover it. 00:29:31.054 [2024-07-15 16:21:06.830988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.054 [2024-07-15 16:21:06.830996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.054 qpair failed and we were unable to recover it. 00:29:31.054 [2024-07-15 16:21:06.831219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.054 [2024-07-15 16:21:06.831227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.054 qpair failed and we were unable to recover it. 00:29:31.054 [2024-07-15 16:21:06.831312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.054 [2024-07-15 16:21:06.831321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.054 qpair failed and we were unable to recover it. 00:29:31.054 [2024-07-15 16:21:06.831676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.054 [2024-07-15 16:21:06.831684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.054 qpair failed and we were unable to recover it. 00:29:31.054 [2024-07-15 16:21:06.832098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.054 [2024-07-15 16:21:06.832107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.054 qpair failed and we were unable to recover it. 00:29:31.054 [2024-07-15 16:21:06.832510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.054 [2024-07-15 16:21:06.832519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.054 qpair failed and we were unable to recover it. 00:29:31.054 [2024-07-15 16:21:06.832904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.054 [2024-07-15 16:21:06.832913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.054 qpair failed and we were unable to recover it. 
00:29:31.054 [2024-07-15 16:21:06.833308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.054 [2024-07-15 16:21:06.833317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.054 qpair failed and we were unable to recover it. 00:29:31.054 [2024-07-15 16:21:06.833615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.054 [2024-07-15 16:21:06.833623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.054 qpair failed and we were unable to recover it. 00:29:31.054 [2024-07-15 16:21:06.834018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.054 [2024-07-15 16:21:06.834026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.054 qpair failed and we were unable to recover it. 00:29:31.054 [2024-07-15 16:21:06.834436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.054 [2024-07-15 16:21:06.834446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.054 qpair failed and we were unable to recover it. 00:29:31.054 [2024-07-15 16:21:06.834839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.054 [2024-07-15 16:21:06.834847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.054 qpair failed and we were unable to recover it. 00:29:31.054 [2024-07-15 16:21:06.835068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.054 [2024-07-15 16:21:06.835077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.054 qpair failed and we were unable to recover it. 00:29:31.054 [2024-07-15 16:21:06.835435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.054 [2024-07-15 16:21:06.835443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.054 qpair failed and we were unable to recover it. 00:29:31.054 [2024-07-15 16:21:06.835838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.054 [2024-07-15 16:21:06.835846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.054 qpair failed and we were unable to recover it. 00:29:31.054 [2024-07-15 16:21:06.836139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.054 [2024-07-15 16:21:06.836147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.054 qpair failed and we were unable to recover it. 00:29:31.054 [2024-07-15 16:21:06.836512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.054 [2024-07-15 16:21:06.836520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.054 qpair failed and we were unable to recover it. 
00:29:31.054 [2024-07-15 16:21:06.836814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.054 [2024-07-15 16:21:06.836823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.054 qpair failed and we were unable to recover it. 00:29:31.054 [2024-07-15 16:21:06.837028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.054 [2024-07-15 16:21:06.837035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.054 qpair failed and we were unable to recover it. 00:29:31.054 [2024-07-15 16:21:06.837429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.054 [2024-07-15 16:21:06.837438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.054 qpair failed and we were unable to recover it. 00:29:31.054 [2024-07-15 16:21:06.837854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.054 [2024-07-15 16:21:06.837863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.054 qpair failed and we were unable to recover it. 00:29:31.054 [2024-07-15 16:21:06.838070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.054 [2024-07-15 16:21:06.838078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.054 qpair failed and we were unable to recover it. 00:29:31.054 [2024-07-15 16:21:06.838484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.054 [2024-07-15 16:21:06.838492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.054 qpair failed and we were unable to recover it. 00:29:31.054 [2024-07-15 16:21:06.838887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.054 [2024-07-15 16:21:06.838895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.054 qpair failed and we were unable to recover it. 00:29:31.054 [2024-07-15 16:21:06.839322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.054 [2024-07-15 16:21:06.839330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.054 qpair failed and we were unable to recover it. 00:29:31.055 [2024-07-15 16:21:06.839725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.055 [2024-07-15 16:21:06.839733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.055 qpair failed and we were unable to recover it. 00:29:31.055 [2024-07-15 16:21:06.840131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.055 [2024-07-15 16:21:06.840138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.055 qpair failed and we were unable to recover it. 
00:29:31.055 [2024-07-15 16:21:06.840514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.055 [2024-07-15 16:21:06.840524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.055 qpair failed and we were unable to recover it. 00:29:31.055 [2024-07-15 16:21:06.840782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.055 [2024-07-15 16:21:06.840791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.055 qpair failed and we were unable to recover it. 00:29:31.055 [2024-07-15 16:21:06.841083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.055 [2024-07-15 16:21:06.841090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.055 qpair failed and we were unable to recover it. 00:29:31.055 [2024-07-15 16:21:06.841481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.055 [2024-07-15 16:21:06.841490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.055 qpair failed and we were unable to recover it. 00:29:31.055 [2024-07-15 16:21:06.841889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.055 [2024-07-15 16:21:06.841898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.055 qpair failed and we were unable to recover it. 00:29:31.055 [2024-07-15 16:21:06.842312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.055 [2024-07-15 16:21:06.842320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.055 qpair failed and we were unable to recover it. 00:29:31.055 [2024-07-15 16:21:06.842720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.055 [2024-07-15 16:21:06.842729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.055 qpair failed and we were unable to recover it. 00:29:31.055 [2024-07-15 16:21:06.843127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.055 [2024-07-15 16:21:06.843136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.055 qpair failed and we were unable to recover it. 00:29:31.055 [2024-07-15 16:21:06.843532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.055 [2024-07-15 16:21:06.843540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.055 qpair failed and we were unable to recover it. 00:29:31.055 [2024-07-15 16:21:06.843953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.055 [2024-07-15 16:21:06.843961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.055 qpair failed and we were unable to recover it. 
00:29:31.055 [2024-07-15 16:21:06.844463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.055 [2024-07-15 16:21:06.844492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.055 qpair failed and we were unable to recover it. 00:29:31.055 [2024-07-15 16:21:06.844906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.055 [2024-07-15 16:21:06.844915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.055 qpair failed and we were unable to recover it. 00:29:31.055 [2024-07-15 16:21:06.845129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.055 [2024-07-15 16:21:06.845137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.055 qpair failed and we were unable to recover it. 00:29:31.055 [2024-07-15 16:21:06.845424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.055 [2024-07-15 16:21:06.845452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.055 qpair failed and we were unable to recover it. 00:29:31.055 [2024-07-15 16:21:06.845854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.055 [2024-07-15 16:21:06.845864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.055 qpair failed and we were unable to recover it. 00:29:31.055 [2024-07-15 16:21:06.846387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.055 [2024-07-15 16:21:06.846417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.055 qpair failed and we were unable to recover it. 00:29:31.055 [2024-07-15 16:21:06.846866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.055 [2024-07-15 16:21:06.846875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.055 qpair failed and we were unable to recover it. 00:29:31.055 [2024-07-15 16:21:06.847415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.055 [2024-07-15 16:21:06.847444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.055 qpair failed and we were unable to recover it. 00:29:31.055 [2024-07-15 16:21:06.847937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.055 [2024-07-15 16:21:06.847947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.055 qpair failed and we were unable to recover it. 00:29:31.055 [2024-07-15 16:21:06.848433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.055 [2024-07-15 16:21:06.848462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.055 qpair failed and we were unable to recover it. 
00:29:31.055 [2024-07-15 16:21:06.848686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.055 [2024-07-15 16:21:06.848695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.055 qpair failed and we were unable to recover it. 00:29:31.055 [2024-07-15 16:21:06.849089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.055 [2024-07-15 16:21:06.849097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.055 qpair failed and we were unable to recover it. 00:29:31.055 [2024-07-15 16:21:06.849502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.055 [2024-07-15 16:21:06.849510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.055 qpair failed and we were unable to recover it. 00:29:31.055 [2024-07-15 16:21:06.849907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.055 [2024-07-15 16:21:06.849918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.055 qpair failed and we were unable to recover it. 00:29:31.055 [2024-07-15 16:21:06.850306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.055 [2024-07-15 16:21:06.850335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.055 qpair failed and we were unable to recover it. 00:29:31.055 [2024-07-15 16:21:06.850733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.055 [2024-07-15 16:21:06.850744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.055 qpair failed and we were unable to recover it. 00:29:31.055 [2024-07-15 16:21:06.851147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.055 [2024-07-15 16:21:06.851156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.055 qpair failed and we were unable to recover it. 00:29:31.055 [2024-07-15 16:21:06.851562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.055 [2024-07-15 16:21:06.851570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.055 qpair failed and we were unable to recover it. 00:29:31.055 [2024-07-15 16:21:06.851963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.055 [2024-07-15 16:21:06.851972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.055 qpair failed and we were unable to recover it. 00:29:31.055 [2024-07-15 16:21:06.852361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.055 [2024-07-15 16:21:06.852370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.055 qpair failed and we were unable to recover it. 
00:29:31.055 [2024-07-15 16:21:06.852764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.055 [2024-07-15 16:21:06.852772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.055 qpair failed and we were unable to recover it. 00:29:31.055 [2024-07-15 16:21:06.853098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.055 [2024-07-15 16:21:06.853106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.055 qpair failed and we were unable to recover it. 00:29:31.055 [2024-07-15 16:21:06.853323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.055 [2024-07-15 16:21:06.853331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.055 qpair failed and we were unable to recover it. 00:29:31.055 [2024-07-15 16:21:06.853680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.055 [2024-07-15 16:21:06.853688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.055 qpair failed and we were unable to recover it. 00:29:31.055 [2024-07-15 16:21:06.854079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.055 [2024-07-15 16:21:06.854087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.055 qpair failed and we were unable to recover it. 00:29:31.055 [2024-07-15 16:21:06.854483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.055 [2024-07-15 16:21:06.854491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.055 qpair failed and we were unable to recover it. 00:29:31.055 [2024-07-15 16:21:06.854875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.055 [2024-07-15 16:21:06.854884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.055 qpair failed and we were unable to recover it. 00:29:31.055 [2024-07-15 16:21:06.855308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.055 [2024-07-15 16:21:06.855317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.055 qpair failed and we were unable to recover it. 00:29:31.055 [2024-07-15 16:21:06.855732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.055 [2024-07-15 16:21:06.855741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.055 qpair failed and we were unable to recover it. 00:29:31.056 [2024-07-15 16:21:06.856135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.056 [2024-07-15 16:21:06.856143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.056 qpair failed and we were unable to recover it. 
00:29:31.056 [2024-07-15 16:21:06.856404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.056 [2024-07-15 16:21:06.856412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.056 qpair failed and we were unable to recover it. 00:29:31.056 [2024-07-15 16:21:06.856806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.056 [2024-07-15 16:21:06.856815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.056 qpair failed and we were unable to recover it. 00:29:31.056 [2024-07-15 16:21:06.857210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.056 [2024-07-15 16:21:06.857219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.056 qpair failed and we were unable to recover it. 00:29:31.056 [2024-07-15 16:21:06.857611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.056 [2024-07-15 16:21:06.857619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.056 qpair failed and we were unable to recover it. 00:29:31.056 [2024-07-15 16:21:06.857874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.056 [2024-07-15 16:21:06.857883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.056 qpair failed and we were unable to recover it. 00:29:31.056 [2024-07-15 16:21:06.858299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.056 [2024-07-15 16:21:06.858307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.056 qpair failed and we were unable to recover it. 00:29:31.056 [2024-07-15 16:21:06.858563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.056 [2024-07-15 16:21:06.858570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.056 qpair failed and we were unable to recover it. 00:29:31.056 [2024-07-15 16:21:06.858965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.056 [2024-07-15 16:21:06.858973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.056 qpair failed and we were unable to recover it. 00:29:31.056 [2024-07-15 16:21:06.859374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.056 [2024-07-15 16:21:06.859382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.056 qpair failed and we were unable to recover it. 00:29:31.056 [2024-07-15 16:21:06.859683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.056 [2024-07-15 16:21:06.859692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.056 qpair failed and we were unable to recover it. 
00:29:31.056 [2024-07-15 16:21:06.860093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.056 [2024-07-15 16:21:06.860101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.056 qpair failed and we were unable to recover it. 00:29:31.056 [2024-07-15 16:21:06.860309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.056 [2024-07-15 16:21:06.860316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.056 qpair failed and we were unable to recover it. 00:29:31.056 [2024-07-15 16:21:06.860580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.056 [2024-07-15 16:21:06.860587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.056 qpair failed and we were unable to recover it. 00:29:31.056 [2024-07-15 16:21:06.861002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.056 [2024-07-15 16:21:06.861010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.056 qpair failed and we were unable to recover it. 00:29:31.056 [2024-07-15 16:21:06.861431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.056 [2024-07-15 16:21:06.861439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.056 qpair failed and we were unable to recover it. 00:29:31.056 [2024-07-15 16:21:06.861644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.056 [2024-07-15 16:21:06.861653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.056 qpair failed and we were unable to recover it. 00:29:31.056 [2024-07-15 16:21:06.862066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.056 [2024-07-15 16:21:06.862074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.056 qpair failed and we were unable to recover it. 00:29:31.056 [2024-07-15 16:21:06.862491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.056 [2024-07-15 16:21:06.862499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.056 qpair failed and we were unable to recover it. 00:29:31.056 [2024-07-15 16:21:06.862894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.056 [2024-07-15 16:21:06.862902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.056 qpair failed and we were unable to recover it. 00:29:31.056 [2024-07-15 16:21:06.863282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.056 [2024-07-15 16:21:06.863290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.056 qpair failed and we were unable to recover it. 
00:29:31.056 [2024-07-15 16:21:06.863497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.056 [2024-07-15 16:21:06.863505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.056 qpair failed and we were unable to recover it. 00:29:31.056 [2024-07-15 16:21:06.863856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.056 [2024-07-15 16:21:06.863865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.056 qpair failed and we were unable to recover it. 00:29:31.056 [2024-07-15 16:21:06.864259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.056 [2024-07-15 16:21:06.864268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.056 qpair failed and we were unable to recover it. 00:29:31.056 [2024-07-15 16:21:06.864695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.056 [2024-07-15 16:21:06.864706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.056 qpair failed and we were unable to recover it. 00:29:31.056 [2024-07-15 16:21:06.865149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.056 [2024-07-15 16:21:06.865157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.056 qpair failed and we were unable to recover it. 00:29:31.056 [2024-07-15 16:21:06.865513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.056 [2024-07-15 16:21:06.865521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.056 qpair failed and we were unable to recover it. 00:29:31.056 [2024-07-15 16:21:06.865943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.056 [2024-07-15 16:21:06.865951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.056 qpair failed and we were unable to recover it. 00:29:31.330 [2024-07-15 16:21:06.866346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.330 [2024-07-15 16:21:06.866355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.330 qpair failed and we were unable to recover it. 00:29:31.330 [2024-07-15 16:21:06.866756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.330 [2024-07-15 16:21:06.866765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.330 qpair failed and we were unable to recover it. 00:29:31.330 [2024-07-15 16:21:06.867179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.330 [2024-07-15 16:21:06.867188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.330 qpair failed and we were unable to recover it. 
00:29:31.330 [2024-07-15 16:21:06.867584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.330 [2024-07-15 16:21:06.867593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.330 qpair failed and we were unable to recover it. 00:29:31.330 [2024-07-15 16:21:06.867849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.330 [2024-07-15 16:21:06.867857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.330 qpair failed and we were unable to recover it. 00:29:31.330 [2024-07-15 16:21:06.868259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.330 [2024-07-15 16:21:06.868267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.330 qpair failed and we were unable to recover it. 00:29:31.330 [2024-07-15 16:21:06.868662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.330 [2024-07-15 16:21:06.868671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.330 qpair failed and we were unable to recover it. 00:29:31.330 [2024-07-15 16:21:06.869063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.330 [2024-07-15 16:21:06.869073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.330 qpair failed and we were unable to recover it. 00:29:31.330 [2024-07-15 16:21:06.869472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.330 [2024-07-15 16:21:06.869482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.330 qpair failed and we were unable to recover it. 00:29:31.330 [2024-07-15 16:21:06.869699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.330 [2024-07-15 16:21:06.869708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.330 qpair failed and we were unable to recover it. 00:29:31.330 [2024-07-15 16:21:06.870067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.330 [2024-07-15 16:21:06.870076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.330 qpair failed and we were unable to recover it. 00:29:31.330 [2024-07-15 16:21:06.870294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.330 [2024-07-15 16:21:06.870303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.330 qpair failed and we were unable to recover it. 00:29:31.330 [2024-07-15 16:21:06.870688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.330 [2024-07-15 16:21:06.870696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.330 qpair failed and we were unable to recover it. 
00:29:31.330 [2024-07-15 16:21:06.870923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.330 [2024-07-15 16:21:06.870931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.330 qpair failed and we were unable to recover it. 00:29:31.331 [2024-07-15 16:21:06.871332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.331 [2024-07-15 16:21:06.871340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.331 qpair failed and we were unable to recover it. 00:29:31.331 [2024-07-15 16:21:06.871716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.331 [2024-07-15 16:21:06.871724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.331 qpair failed and we were unable to recover it. 00:29:31.331 [2024-07-15 16:21:06.871945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.331 [2024-07-15 16:21:06.871954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.331 qpair failed and we were unable to recover it. 00:29:31.331 [2024-07-15 16:21:06.872150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.331 [2024-07-15 16:21:06.872161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.331 qpair failed and we were unable to recover it. 00:29:31.331 [2024-07-15 16:21:06.872499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.331 [2024-07-15 16:21:06.872509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.331 qpair failed and we were unable to recover it. 00:29:31.331 [2024-07-15 16:21:06.872901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.331 [2024-07-15 16:21:06.872910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.331 qpair failed and we were unable to recover it. 00:29:31.331 [2024-07-15 16:21:06.873308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.331 [2024-07-15 16:21:06.873317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.331 qpair failed and we were unable to recover it. 00:29:31.331 [2024-07-15 16:21:06.873700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.331 [2024-07-15 16:21:06.873708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.331 qpair failed and we were unable to recover it. 00:29:31.331 [2024-07-15 16:21:06.874126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.331 [2024-07-15 16:21:06.874135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.331 qpair failed and we were unable to recover it. 
00:29:31.331 [2024-07-15 16:21:06.874429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.331 [2024-07-15 16:21:06.874437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.331 qpair failed and we were unable to recover it. 00:29:31.331 [2024-07-15 16:21:06.874726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.331 [2024-07-15 16:21:06.874734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.331 qpair failed and we were unable to recover it. 00:29:31.331 [2024-07-15 16:21:06.875130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.331 [2024-07-15 16:21:06.875138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.331 qpair failed and we were unable to recover it. 00:29:31.331 [2024-07-15 16:21:06.875534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.331 [2024-07-15 16:21:06.875542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.331 qpair failed and we were unable to recover it. 00:29:31.331 [2024-07-15 16:21:06.875874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.331 [2024-07-15 16:21:06.875891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.331 qpair failed and we were unable to recover it. 00:29:31.331 [2024-07-15 16:21:06.876276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.331 [2024-07-15 16:21:06.876284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.331 qpair failed and we were unable to recover it. 00:29:31.331 [2024-07-15 16:21:06.876672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.331 [2024-07-15 16:21:06.876680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.331 qpair failed and we were unable to recover it. 00:29:31.331 [2024-07-15 16:21:06.876971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.331 [2024-07-15 16:21:06.876980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.331 qpair failed and we were unable to recover it. 00:29:31.331 [2024-07-15 16:21:06.877210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.331 [2024-07-15 16:21:06.877218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.331 qpair failed and we were unable to recover it. 00:29:31.331 [2024-07-15 16:21:06.877433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.331 [2024-07-15 16:21:06.877441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.331 qpair failed and we were unable to recover it. 
00:29:31.331 [2024-07-15 16:21:06.877619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.331 [2024-07-15 16:21:06.877628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.331 qpair failed and we were unable to recover it. 00:29:31.331 [2024-07-15 16:21:06.878032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.331 [2024-07-15 16:21:06.878041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.331 qpair failed and we were unable to recover it. 00:29:31.331 [2024-07-15 16:21:06.878450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.331 [2024-07-15 16:21:06.878458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.331 qpair failed and we were unable to recover it. 00:29:31.331 [2024-07-15 16:21:06.878850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.331 [2024-07-15 16:21:06.878860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.331 qpair failed and we were unable to recover it. 00:29:31.331 [2024-07-15 16:21:06.879255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.331 [2024-07-15 16:21:06.879263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.331 qpair failed and we were unable to recover it. 00:29:31.331 [2024-07-15 16:21:06.879682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.331 [2024-07-15 16:21:06.879690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.331 qpair failed and we were unable to recover it. 00:29:31.331 [2024-07-15 16:21:06.880077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.331 [2024-07-15 16:21:06.880085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.331 qpair failed and we were unable to recover it. 00:29:31.331 [2024-07-15 16:21:06.880508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.331 [2024-07-15 16:21:06.880516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.331 qpair failed and we were unable to recover it. 00:29:31.331 [2024-07-15 16:21:06.880908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.331 [2024-07-15 16:21:06.880916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.331 qpair failed and we were unable to recover it. 00:29:31.331 [2024-07-15 16:21:06.881331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.331 [2024-07-15 16:21:06.881340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.331 qpair failed and we were unable to recover it. 
00:29:31.331 [2024-07-15 16:21:06.881727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.331 [2024-07-15 16:21:06.881736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.331 qpair failed and we were unable to recover it. 00:29:31.331 [2024-07-15 16:21:06.881945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.331 [2024-07-15 16:21:06.881954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.331 qpair failed and we were unable to recover it. 00:29:31.331 [2024-07-15 16:21:06.882324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.331 [2024-07-15 16:21:06.882333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.331 qpair failed and we were unable to recover it. 00:29:31.331 [2024-07-15 16:21:06.882770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.331 [2024-07-15 16:21:06.882778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.331 qpair failed and we were unable to recover it. 00:29:31.331 [2024-07-15 16:21:06.883081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.331 [2024-07-15 16:21:06.883090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.331 qpair failed and we were unable to recover it. 00:29:31.331 [2024-07-15 16:21:06.883473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.331 [2024-07-15 16:21:06.883480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.331 qpair failed and we were unable to recover it. 00:29:31.331 [2024-07-15 16:21:06.883881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.331 [2024-07-15 16:21:06.883889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.331 qpair failed and we were unable to recover it. 00:29:31.331 [2024-07-15 16:21:06.884375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.331 [2024-07-15 16:21:06.884404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.331 qpair failed and we were unable to recover it. 00:29:31.331 [2024-07-15 16:21:06.884805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.331 [2024-07-15 16:21:06.884815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.331 qpair failed and we were unable to recover it. 00:29:31.331 [2024-07-15 16:21:06.885228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.331 [2024-07-15 16:21:06.885236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.331 qpair failed and we were unable to recover it. 
00:29:31.331 [2024-07-15 16:21:06.885506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.331 [2024-07-15 16:21:06.885514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.331 qpair failed and we were unable to recover it. 00:29:31.331 [2024-07-15 16:21:06.885892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.331 [2024-07-15 16:21:06.885900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.331 qpair failed and we were unable to recover it. 00:29:31.332 [2024-07-15 16:21:06.886296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.332 [2024-07-15 16:21:06.886305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.332 qpair failed and we were unable to recover it. 00:29:31.332 [2024-07-15 16:21:06.886734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.332 [2024-07-15 16:21:06.886743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.332 qpair failed and we were unable to recover it. 00:29:31.332 [2024-07-15 16:21:06.886952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.332 [2024-07-15 16:21:06.886962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.332 qpair failed and we were unable to recover it. 00:29:31.332 [2024-07-15 16:21:06.887388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.332 [2024-07-15 16:21:06.887396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.332 qpair failed and we were unable to recover it. 00:29:31.332 [2024-07-15 16:21:06.887808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.332 [2024-07-15 16:21:06.887816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.332 qpair failed and we were unable to recover it. 00:29:31.332 [2024-07-15 16:21:06.888210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.332 [2024-07-15 16:21:06.888219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.332 qpair failed and we were unable to recover it. 00:29:31.332 [2024-07-15 16:21:06.888613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.332 [2024-07-15 16:21:06.888622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.332 qpair failed and we were unable to recover it. 00:29:31.332 [2024-07-15 16:21:06.888879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.332 [2024-07-15 16:21:06.888887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.332 qpair failed and we were unable to recover it. 
00:29:31.332 [2024-07-15 16:21:06.889113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.332 [2024-07-15 16:21:06.889130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.332 qpair failed and we were unable to recover it. 00:29:31.332 [2024-07-15 16:21:06.889321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.332 [2024-07-15 16:21:06.889331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.332 qpair failed and we were unable to recover it. 00:29:31.332 [2024-07-15 16:21:06.889728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.332 [2024-07-15 16:21:06.889737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.332 qpair failed and we were unable to recover it. 00:29:31.332 [2024-07-15 16:21:06.890177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.332 [2024-07-15 16:21:06.890185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.332 qpair failed and we were unable to recover it. 00:29:31.332 [2024-07-15 16:21:06.890474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.332 [2024-07-15 16:21:06.890482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.332 qpair failed and we were unable to recover it. 00:29:31.332 [2024-07-15 16:21:06.890831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.332 [2024-07-15 16:21:06.890840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.332 qpair failed and we were unable to recover it. 00:29:31.332 [2024-07-15 16:21:06.891242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.332 [2024-07-15 16:21:06.891250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.332 qpair failed and we were unable to recover it. 00:29:31.332 [2024-07-15 16:21:06.891508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.332 [2024-07-15 16:21:06.891516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.332 qpair failed and we were unable to recover it. 00:29:31.332 [2024-07-15 16:21:06.891908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.332 [2024-07-15 16:21:06.891916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.332 qpair failed and we were unable to recover it. 00:29:31.332 [2024-07-15 16:21:06.892317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.332 [2024-07-15 16:21:06.892326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.332 qpair failed and we were unable to recover it. 
00:29:31.332 [2024-07-15 16:21:06.892728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.332 [2024-07-15 16:21:06.892736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.332 qpair failed and we were unable to recover it. 00:29:31.332 [2024-07-15 16:21:06.892992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.332 [2024-07-15 16:21:06.893001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.332 qpair failed and we were unable to recover it. 00:29:31.332 [2024-07-15 16:21:06.893431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.332 [2024-07-15 16:21:06.893440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.332 qpair failed and we were unable to recover it. 00:29:31.332 [2024-07-15 16:21:06.893663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.332 [2024-07-15 16:21:06.893673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.332 qpair failed and we were unable to recover it. 00:29:31.332 [2024-07-15 16:21:06.894063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.332 [2024-07-15 16:21:06.894071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.332 qpair failed and we were unable to recover it. 00:29:31.332 [2024-07-15 16:21:06.894456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.332 [2024-07-15 16:21:06.894465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.332 qpair failed and we were unable to recover it. 00:29:31.332 [2024-07-15 16:21:06.894858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.332 [2024-07-15 16:21:06.894865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.332 qpair failed and we were unable to recover it. 00:29:31.332 [2024-07-15 16:21:06.895257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.332 [2024-07-15 16:21:06.895266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.332 qpair failed and we were unable to recover it. 00:29:31.332 [2024-07-15 16:21:06.895648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.332 [2024-07-15 16:21:06.895657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.332 qpair failed and we were unable to recover it. 00:29:31.332 [2024-07-15 16:21:06.895965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.332 [2024-07-15 16:21:06.895974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.332 qpair failed and we were unable to recover it. 
00:29:31.332 [2024-07-15 16:21:06.896373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.332 [2024-07-15 16:21:06.896382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.332 qpair failed and we were unable to recover it. 00:29:31.332 [2024-07-15 16:21:06.896790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.332 [2024-07-15 16:21:06.896798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.332 qpair failed and we were unable to recover it. 00:29:31.332 [2024-07-15 16:21:06.897055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.332 [2024-07-15 16:21:06.897063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.332 qpair failed and we were unable to recover it. 00:29:31.332 [2024-07-15 16:21:06.897284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.332 [2024-07-15 16:21:06.897292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.332 qpair failed and we were unable to recover it. 00:29:31.332 [2024-07-15 16:21:06.897727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.332 [2024-07-15 16:21:06.897735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.332 qpair failed and we were unable to recover it. 00:29:31.332 [2024-07-15 16:21:06.898129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.332 [2024-07-15 16:21:06.898137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.332 qpair failed and we were unable to recover it. 00:29:31.332 [2024-07-15 16:21:06.898534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.332 [2024-07-15 16:21:06.898542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.332 qpair failed and we were unable to recover it. 00:29:31.332 [2024-07-15 16:21:06.898960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.332 [2024-07-15 16:21:06.898968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.332 qpair failed and we were unable to recover it. 00:29:31.332 [2024-07-15 16:21:06.899455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.332 [2024-07-15 16:21:06.899484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.332 qpair failed and we were unable to recover it. 00:29:31.332 [2024-07-15 16:21:06.899693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.332 [2024-07-15 16:21:06.899703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.332 qpair failed and we were unable to recover it. 
00:29:31.332 [2024-07-15 16:21:06.900072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.332 [2024-07-15 16:21:06.900080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.332 qpair failed and we were unable to recover it. 00:29:31.332 [2024-07-15 16:21:06.900503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.332 [2024-07-15 16:21:06.900512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.332 qpair failed and we were unable to recover it. 00:29:31.333 [2024-07-15 16:21:06.900910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.333 [2024-07-15 16:21:06.900919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.333 qpair failed and we were unable to recover it. 00:29:31.333 [2024-07-15 16:21:06.901442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.333 [2024-07-15 16:21:06.901471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.333 qpair failed and we were unable to recover it. 00:29:31.333 [2024-07-15 16:21:06.901728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.333 [2024-07-15 16:21:06.901737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.333 qpair failed and we were unable to recover it. 00:29:31.333 [2024-07-15 16:21:06.901936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.333 [2024-07-15 16:21:06.901944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.333 qpair failed and we were unable to recover it. 00:29:31.333 [2024-07-15 16:21:06.902155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.333 [2024-07-15 16:21:06.902163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.333 qpair failed and we were unable to recover it. 00:29:31.333 [2024-07-15 16:21:06.902587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.333 [2024-07-15 16:21:06.902598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.333 qpair failed and we were unable to recover it. 00:29:31.333 [2024-07-15 16:21:06.902992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.333 [2024-07-15 16:21:06.903000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.333 qpair failed and we were unable to recover it. 00:29:31.333 [2024-07-15 16:21:06.903386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.333 [2024-07-15 16:21:06.903395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.333 qpair failed and we were unable to recover it. 
00:29:31.333 [2024-07-15 16:21:06.903813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.333 [2024-07-15 16:21:06.903824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.333 qpair failed and we were unable to recover it. 00:29:31.333 [2024-07-15 16:21:06.904040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.333 [2024-07-15 16:21:06.904048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.333 qpair failed and we were unable to recover it. 00:29:31.333 [2024-07-15 16:21:06.904448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.333 [2024-07-15 16:21:06.904457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.333 qpair failed and we were unable to recover it. 00:29:31.333 [2024-07-15 16:21:06.904666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.333 [2024-07-15 16:21:06.904675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.333 qpair failed and we were unable to recover it. 00:29:31.333 [2024-07-15 16:21:06.905129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.333 [2024-07-15 16:21:06.905139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.333 qpair failed and we were unable to recover it. 00:29:31.333 [2024-07-15 16:21:06.905417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.333 [2024-07-15 16:21:06.905428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.333 qpair failed and we were unable to recover it. 00:29:31.333 [2024-07-15 16:21:06.905819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.333 [2024-07-15 16:21:06.905828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.333 qpair failed and we were unable to recover it. 00:29:31.333 [2024-07-15 16:21:06.906040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.333 [2024-07-15 16:21:06.906048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.333 qpair failed and we were unable to recover it. 00:29:31.333 [2024-07-15 16:21:06.906495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.333 [2024-07-15 16:21:06.906504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.333 qpair failed and we were unable to recover it. 00:29:31.333 [2024-07-15 16:21:06.906932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.333 [2024-07-15 16:21:06.906940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.333 qpair failed and we were unable to recover it. 
00:29:31.333 [2024-07-15 16:21:06.907154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.333 [2024-07-15 16:21:06.907162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.333 qpair failed and we were unable to recover it. 00:29:31.333 [2024-07-15 16:21:06.907581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.333 [2024-07-15 16:21:06.907590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.333 qpair failed and we were unable to recover it. 00:29:31.333 [2024-07-15 16:21:06.907799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.333 [2024-07-15 16:21:06.907807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.333 qpair failed and we were unable to recover it. 00:29:31.333 [2024-07-15 16:21:06.908213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.333 [2024-07-15 16:21:06.908224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.333 qpair failed and we were unable to recover it. 00:29:31.333 [2024-07-15 16:21:06.908631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.333 [2024-07-15 16:21:06.908640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.333 qpair failed and we were unable to recover it. 00:29:31.333 [2024-07-15 16:21:06.909055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.333 [2024-07-15 16:21:06.909064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.333 qpair failed and we were unable to recover it. 00:29:31.333 [2024-07-15 16:21:06.909355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.333 [2024-07-15 16:21:06.909365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.333 qpair failed and we were unable to recover it. 00:29:31.333 [2024-07-15 16:21:06.909574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.333 [2024-07-15 16:21:06.909582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.333 qpair failed and we were unable to recover it. 00:29:31.333 [2024-07-15 16:21:06.909855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.333 [2024-07-15 16:21:06.909864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.333 qpair failed and we were unable to recover it. 00:29:31.333 [2024-07-15 16:21:06.910085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.333 [2024-07-15 16:21:06.910094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.333 qpair failed and we were unable to recover it. 
00:29:31.333 [2024-07-15 16:21:06.910505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.333 [2024-07-15 16:21:06.910513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.333 qpair failed and we were unable to recover it. 00:29:31.333 [2024-07-15 16:21:06.910692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.333 [2024-07-15 16:21:06.910702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.333 qpair failed and we were unable to recover it. 00:29:31.333 [2024-07-15 16:21:06.910908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.333 [2024-07-15 16:21:06.910918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.333 qpair failed and we were unable to recover it. 00:29:31.333 [2024-07-15 16:21:06.911291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.333 [2024-07-15 16:21:06.911299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.333 qpair failed and we were unable to recover it. 00:29:31.333 [2024-07-15 16:21:06.911679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.333 [2024-07-15 16:21:06.911687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.333 qpair failed and we were unable to recover it. 00:29:31.333 [2024-07-15 16:21:06.912077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.333 [2024-07-15 16:21:06.912086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.333 qpair failed and we were unable to recover it. 00:29:31.333 [2024-07-15 16:21:06.912487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.333 [2024-07-15 16:21:06.912495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.333 qpair failed and we were unable to recover it. 00:29:31.333 [2024-07-15 16:21:06.912921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.333 [2024-07-15 16:21:06.912929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.333 qpair failed and we were unable to recover it. 00:29:31.333 [2024-07-15 16:21:06.913322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.333 [2024-07-15 16:21:06.913332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.333 qpair failed and we were unable to recover it. 00:29:31.333 [2024-07-15 16:21:06.913618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.333 [2024-07-15 16:21:06.913627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.333 qpair failed and we were unable to recover it. 
00:29:31.333 [2024-07-15 16:21:06.914034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.333 [2024-07-15 16:21:06.914043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.333 qpair failed and we were unable to recover it. 00:29:31.333 [2024-07-15 16:21:06.914425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.333 [2024-07-15 16:21:06.914435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.333 qpair failed and we were unable to recover it. 00:29:31.333 [2024-07-15 16:21:06.914834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-15 16:21:06.914842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 00:29:31.334 [2024-07-15 16:21:06.915327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-15 16:21:06.915335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 00:29:31.334 [2024-07-15 16:21:06.915720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-15 16:21:06.915727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 00:29:31.334 [2024-07-15 16:21:06.916146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-15 16:21:06.916154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 00:29:31.334 [2024-07-15 16:21:06.916425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-15 16:21:06.916433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 00:29:31.334 [2024-07-15 16:21:06.916733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-15 16:21:06.916741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 00:29:31.334 [2024-07-15 16:21:06.917135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-15 16:21:06.917144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 00:29:31.334 [2024-07-15 16:21:06.917539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-15 16:21:06.917547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 
00:29:31.334 [2024-07-15 16:21:06.917752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-15 16:21:06.917762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 00:29:31.334 [2024-07-15 16:21:06.917983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-15 16:21:06.917992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 00:29:31.334 [2024-07-15 16:21:06.918199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-15 16:21:06.918208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 00:29:31.334 [2024-07-15 16:21:06.918634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-15 16:21:06.918642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 00:29:31.334 [2024-07-15 16:21:06.919058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-15 16:21:06.919066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 00:29:31.334 [2024-07-15 16:21:06.919273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-15 16:21:06.919281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 00:29:31.334 [2024-07-15 16:21:06.919487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-15 16:21:06.919496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 00:29:31.334 [2024-07-15 16:21:06.919947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-15 16:21:06.919955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 00:29:31.334 [2024-07-15 16:21:06.920346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-15 16:21:06.920354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 00:29:31.334 [2024-07-15 16:21:06.920753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-15 16:21:06.920761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 
00:29:31.334 [2024-07-15 16:21:06.921021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-15 16:21:06.921029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 00:29:31.334 [2024-07-15 16:21:06.921219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-15 16:21:06.921228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 00:29:31.334 [2024-07-15 16:21:06.921679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-15 16:21:06.921687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 00:29:31.334 [2024-07-15 16:21:06.922085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-15 16:21:06.922095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 00:29:31.334 [2024-07-15 16:21:06.922492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-15 16:21:06.922500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 00:29:31.334 [2024-07-15 16:21:06.922876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-15 16:21:06.922884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 00:29:31.334 [2024-07-15 16:21:06.923303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-15 16:21:06.923312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 00:29:31.334 [2024-07-15 16:21:06.923525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-15 16:21:06.923533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 00:29:31.334 [2024-07-15 16:21:06.923940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-15 16:21:06.923949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 00:29:31.334 [2024-07-15 16:21:06.924342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-15 16:21:06.924351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 
00:29:31.334 [2024-07-15 16:21:06.924765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-15 16:21:06.924773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 00:29:31.334 [2024-07-15 16:21:06.924983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-15 16:21:06.924991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 00:29:31.334 [2024-07-15 16:21:06.925392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-15 16:21:06.925401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 00:29:31.334 [2024-07-15 16:21:06.925795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-15 16:21:06.925803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 00:29:31.334 [2024-07-15 16:21:06.926011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-15 16:21:06.926019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 00:29:31.334 [2024-07-15 16:21:06.926401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-15 16:21:06.926410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-15 16:21:06.926876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-15 16:21:06.926884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-15 16:21:06.927280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-15 16:21:06.927289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-15 16:21:06.927709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-15 16:21:06.927717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-15 16:21:06.928111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-15 16:21:06.928119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 
00:29:31.335 [2024-07-15 16:21:06.928562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-15 16:21:06.928571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-15 16:21:06.928966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-15 16:21:06.928974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-15 16:21:06.929371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-15 16:21:06.929398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-15 16:21:06.929804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-15 16:21:06.929813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-15 16:21:06.930308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-15 16:21:06.930337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-15 16:21:06.930744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-15 16:21:06.930753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-15 16:21:06.931174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-15 16:21:06.931182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-15 16:21:06.931580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-15 16:21:06.931588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-15 16:21:06.931849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-15 16:21:06.931857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-15 16:21:06.932308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-15 16:21:06.932316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 
00:29:31.335 [2024-07-15 16:21:06.932712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-15 16:21:06.932720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-15 16:21:06.933113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-15 16:21:06.933126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-15 16:21:06.933522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-15 16:21:06.933531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-15 16:21:06.933924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-15 16:21:06.933932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-15 16:21:06.934285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-15 16:21:06.934314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-15 16:21:06.934722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-15 16:21:06.934732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-15 16:21:06.935135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-15 16:21:06.935144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-15 16:21:06.935328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-15 16:21:06.935336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-15 16:21:06.935740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-15 16:21:06.935749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-15 16:21:06.936146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-15 16:21:06.936155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 
00:29:31.335 [2024-07-15 16:21:06.936560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-15 16:21:06.936569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-15 16:21:06.936774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-15 16:21:06.936781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-15 16:21:06.936990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-15 16:21:06.936998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-15 16:21:06.937221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-15 16:21:06.937233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-15 16:21:06.937673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-15 16:21:06.937682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-15 16:21:06.938069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-15 16:21:06.938078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-15 16:21:06.938508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-15 16:21:06.938516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-15 16:21:06.938897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-15 16:21:06.938906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-15 16:21:06.939332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-15 16:21:06.939340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-15 16:21:06.939735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-15 16:21:06.939743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 
00:29:31.335 [2024-07-15 16:21:06.940162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-15 16:21:06.940170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-15 16:21:06.940395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-15 16:21:06.940402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-15 16:21:06.940807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-15 16:21:06.940816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-15 16:21:06.941212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-15 16:21:06.941221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-15 16:21:06.941566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-15 16:21:06.941575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-15 16:21:06.941968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-15 16:21:06.941976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-15 16:21:06.942372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-15 16:21:06.942380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-15 16:21:06.942776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-15 16:21:06.942785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-15 16:21:06.942853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-15 16:21:06.942864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-15 16:21:06.942948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-15 16:21:06.942955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 
00:29:31.336 [2024-07-15 16:21:06.943240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-15 16:21:06.943248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-15 16:21:06.943552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-15 16:21:06.943560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-15 16:21:06.943984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-15 16:21:06.943993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-15 16:21:06.944146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-15 16:21:06.944154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-15 16:21:06.944459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-15 16:21:06.944467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-15 16:21:06.944854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-15 16:21:06.944863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-15 16:21:06.945261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-15 16:21:06.945270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-15 16:21:06.945682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-15 16:21:06.945691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-15 16:21:06.946086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-15 16:21:06.946095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-15 16:21:06.946316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-15 16:21:06.946324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 
00:29:31.336 [2024-07-15 16:21:06.946723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-15 16:21:06.946731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-15 16:21:06.947128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-15 16:21:06.947136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-15 16:21:06.947317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-15 16:21:06.947325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-15 16:21:06.947693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-15 16:21:06.947701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-15 16:21:06.948095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-15 16:21:06.948103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-15 16:21:06.948513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-15 16:21:06.948523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-15 16:21:06.948729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-15 16:21:06.948736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-15 16:21:06.949095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-15 16:21:06.949104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-15 16:21:06.949498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-15 16:21:06.949507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-15 16:21:06.949926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-15 16:21:06.949935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 
00:29:31.336 [2024-07-15 16:21:06.950362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-15 16:21:06.950370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-15 16:21:06.950654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-15 16:21:06.950663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-15 16:21:06.950908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-15 16:21:06.950916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-15 16:21:06.951297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-15 16:21:06.951307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-15 16:21:06.951699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-15 16:21:06.951707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-15 16:21:06.952152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-15 16:21:06.952160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-15 16:21:06.952465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-15 16:21:06.952474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-15 16:21:06.952872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-15 16:21:06.952880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-15 16:21:06.953086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-15 16:21:06.953093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-15 16:21:06.953292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-15 16:21:06.953302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 
00:29:31.336 [2024-07-15 16:21:06.953701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-15 16:21:06.953709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-15 16:21:06.954135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-15 16:21:06.954144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-15 16:21:06.954327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-15 16:21:06.954335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-15 16:21:06.954578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-15 16:21:06.954586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-15 16:21:06.954982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-15 16:21:06.954990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-15 16:21:06.955375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-15 16:21:06.955383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-15 16:21:06.955802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-15 16:21:06.955810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-15 16:21:06.956018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-15 16:21:06.956026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-15 16:21:06.956445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-15 16:21:06.956453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-15 16:21:06.956971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-15 16:21:06.956979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 
00:29:31.337 [2024-07-15 16:21:06.957179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-15 16:21:06.957188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-15 16:21:06.957576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-15 16:21:06.957584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-15 16:21:06.957791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-15 16:21:06.957798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-15 16:21:06.958185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-15 16:21:06.958193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-15 16:21:06.958461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-15 16:21:06.958468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-15 16:21:06.958864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-15 16:21:06.958873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-15 16:21:06.959352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-15 16:21:06.959361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-15 16:21:06.959746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-15 16:21:06.959755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-15 16:21:06.960179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-15 16:21:06.960187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-15 16:21:06.960581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-15 16:21:06.960589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 
00:29:31.337 [2024-07-15 16:21:06.960865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-15 16:21:06.960873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-15 16:21:06.961094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-15 16:21:06.961102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-15 16:21:06.961495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-15 16:21:06.961504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-15 16:21:06.961896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-15 16:21:06.961905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-15 16:21:06.962324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-15 16:21:06.962333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-15 16:21:06.962708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-15 16:21:06.962716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-15 16:21:06.963140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-15 16:21:06.963148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-15 16:21:06.963547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-15 16:21:06.963555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-15 16:21:06.963990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-15 16:21:06.963998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-15 16:21:06.964416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-15 16:21:06.964424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 
00:29:31.337 [2024-07-15 16:21:06.964841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-15 16:21:06.964850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-15 16:21:06.965080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-15 16:21:06.965088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-15 16:21:06.965534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-15 16:21:06.965542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-15 16:21:06.965937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-15 16:21:06.965947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-15 16:21:06.966458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-15 16:21:06.966487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-15 16:21:06.966892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-15 16:21:06.966903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-15 16:21:06.967432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-15 16:21:06.967461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-15 16:21:06.967870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-15 16:21:06.967879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-15 16:21:06.968367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-15 16:21:06.968395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-15 16:21:06.968733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-15 16:21:06.968742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 
00:29:31.337 [2024-07-15 16:21:06.969149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-15 16:21:06.969159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-15 16:21:06.969603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-15 16:21:06.969612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-15 16:21:06.969869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-15 16:21:06.969877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-15 16:21:06.970274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-15 16:21:06.970283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-15 16:21:06.970505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-15 16:21:06.970512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-15 16:21:06.970903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-15 16:21:06.970911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-15 16:21:06.971334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.338 [2024-07-15 16:21:06.971343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.338 qpair failed and we were unable to recover it. 00:29:31.338 [2024-07-15 16:21:06.971740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.338 [2024-07-15 16:21:06.971749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.338 qpair failed and we were unable to recover it. 00:29:31.338 [2024-07-15 16:21:06.972147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.338 [2024-07-15 16:21:06.972155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.338 qpair failed and we were unable to recover it. 00:29:31.338 [2024-07-15 16:21:06.972557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.338 [2024-07-15 16:21:06.972566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.338 qpair failed and we were unable to recover it. 
00:29:31.338 [2024-07-15 16:21:06.972831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.338 [2024-07-15 16:21:06.972839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.338 qpair failed and we were unable to recover it. 00:29:31.338 [2024-07-15 16:21:06.973045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.338 [2024-07-15 16:21:06.973054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.338 qpair failed and we were unable to recover it. 00:29:31.338 [2024-07-15 16:21:06.973464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.338 [2024-07-15 16:21:06.973472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.338 qpair failed and we were unable to recover it. 00:29:31.338 [2024-07-15 16:21:06.973866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.338 [2024-07-15 16:21:06.973874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.338 qpair failed and we were unable to recover it. 00:29:31.338 [2024-07-15 16:21:06.974291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.338 [2024-07-15 16:21:06.974300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.338 qpair failed and we were unable to recover it. 00:29:31.338 [2024-07-15 16:21:06.974722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.338 [2024-07-15 16:21:06.974730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.338 qpair failed and we were unable to recover it. 00:29:31.338 [2024-07-15 16:21:06.975133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.338 [2024-07-15 16:21:06.975143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.338 qpair failed and we were unable to recover it. 00:29:31.338 [2024-07-15 16:21:06.975503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.338 [2024-07-15 16:21:06.975511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.338 qpair failed and we were unable to recover it. 00:29:31.338 [2024-07-15 16:21:06.975718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.338 [2024-07-15 16:21:06.975726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.338 qpair failed and we were unable to recover it. 00:29:31.338 [2024-07-15 16:21:06.976162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.338 [2024-07-15 16:21:06.976170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.338 qpair failed and we were unable to recover it. 
00:29:31.338 [2024-07-15 16:21:06.976594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.338 [2024-07-15 16:21:06.976602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.338 qpair failed and we were unable to recover it. 00:29:31.338 [2024-07-15 16:21:06.976998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.338 [2024-07-15 16:21:06.977007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.338 qpair failed and we were unable to recover it. 00:29:31.338 [2024-07-15 16:21:06.977387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.338 [2024-07-15 16:21:06.977396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.338 qpair failed and we were unable to recover it. 00:29:31.338 [2024-07-15 16:21:06.977793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.338 [2024-07-15 16:21:06.977801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.338 qpair failed and we were unable to recover it. 00:29:31.338 [2024-07-15 16:21:06.978220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.338 [2024-07-15 16:21:06.978228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.338 qpair failed and we were unable to recover it. 00:29:31.338 [2024-07-15 16:21:06.978632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.338 [2024-07-15 16:21:06.978641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.338 qpair failed and we were unable to recover it. 00:29:31.338 [2024-07-15 16:21:06.979056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.338 [2024-07-15 16:21:06.979065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.338 qpair failed and we were unable to recover it. 00:29:31.338 [2024-07-15 16:21:06.979462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.338 [2024-07-15 16:21:06.979470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.338 qpair failed and we were unable to recover it. 00:29:31.338 [2024-07-15 16:21:06.979870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.338 [2024-07-15 16:21:06.979879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.338 qpair failed and we were unable to recover it. 00:29:31.338 [2024-07-15 16:21:06.980327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.338 [2024-07-15 16:21:06.980336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.338 qpair failed and we were unable to recover it. 
00:29:31.338 [2024-07-15 16:21:06.980754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.338 [2024-07-15 16:21:06.980762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.338 qpair failed and we were unable to recover it. 00:29:31.338 [2024-07-15 16:21:06.981158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.338 [2024-07-15 16:21:06.981166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.338 qpair failed and we were unable to recover it. 00:29:31.338 [2024-07-15 16:21:06.981469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.338 [2024-07-15 16:21:06.981477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.338 qpair failed and we were unable to recover it. 00:29:31.338 [2024-07-15 16:21:06.981699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.338 [2024-07-15 16:21:06.981707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.338 qpair failed and we were unable to recover it. 00:29:31.338 [2024-07-15 16:21:06.982091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.338 [2024-07-15 16:21:06.982099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.338 qpair failed and we were unable to recover it. 00:29:31.338 [2024-07-15 16:21:06.982323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.338 [2024-07-15 16:21:06.982332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.338 qpair failed and we were unable to recover it. 00:29:31.338 [2024-07-15 16:21:06.982731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.338 [2024-07-15 16:21:06.982740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.338 qpair failed and we were unable to recover it. 00:29:31.338 [2024-07-15 16:21:06.983132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.338 [2024-07-15 16:21:06.983141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.338 qpair failed and we were unable to recover it. 00:29:31.338 [2024-07-15 16:21:06.983532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.338 [2024-07-15 16:21:06.983542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.338 qpair failed and we were unable to recover it. 00:29:31.338 [2024-07-15 16:21:06.983801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.338 [2024-07-15 16:21:06.983810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.338 qpair failed and we were unable to recover it. 
00:29:31.338 [2024-07-15 16:21:06.984070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.338 [2024-07-15 16:21:06.984078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.338 qpair failed and we were unable to recover it. 00:29:31.338 [2024-07-15 16:21:06.984162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.338 [2024-07-15 16:21:06.984170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.338 qpair failed and we were unable to recover it. 00:29:31.338 [2024-07-15 16:21:06.984433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.338 [2024-07-15 16:21:06.984441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.338 qpair failed and we were unable to recover it. 00:29:31.338 [2024-07-15 16:21:06.984855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.338 [2024-07-15 16:21:06.984863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.338 qpair failed and we were unable to recover it. 00:29:31.338 [2024-07-15 16:21:06.985256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.338 [2024-07-15 16:21:06.985264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.338 qpair failed and we were unable to recover it. 00:29:31.338 [2024-07-15 16:21:06.985687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.338 [2024-07-15 16:21:06.985695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.338 qpair failed and we were unable to recover it. 00:29:31.338 [2024-07-15 16:21:06.986135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.338 [2024-07-15 16:21:06.986144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.338 qpair failed and we were unable to recover it. 00:29:31.338 [2024-07-15 16:21:06.986533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.339 [2024-07-15 16:21:06.986541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.339 qpair failed and we were unable to recover it. 00:29:31.339 [2024-07-15 16:21:06.986873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.339 [2024-07-15 16:21:06.986882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.339 qpair failed and we were unable to recover it. 00:29:31.339 [2024-07-15 16:21:06.987269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.339 [2024-07-15 16:21:06.987277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.339 qpair failed and we were unable to recover it. 
00:29:31.339 [2024-07-15 16:21:06.987677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.339 [2024-07-15 16:21:06.987685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.339 qpair failed and we were unable to recover it. 00:29:31.339 [2024-07-15 16:21:06.988102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.339 [2024-07-15 16:21:06.988110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.339 qpair failed and we were unable to recover it. 00:29:31.339 [2024-07-15 16:21:06.988509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.339 [2024-07-15 16:21:06.988517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.339 qpair failed and we were unable to recover it. 00:29:31.339 [2024-07-15 16:21:06.988919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.339 [2024-07-15 16:21:06.988927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.339 qpair failed and we were unable to recover it. 00:29:31.339 [2024-07-15 16:21:06.989322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.339 [2024-07-15 16:21:06.989330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.339 qpair failed and we were unable to recover it. 00:29:31.339 [2024-07-15 16:21:06.989550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.339 [2024-07-15 16:21:06.989558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.339 qpair failed and we were unable to recover it. 00:29:31.339 [2024-07-15 16:21:06.989957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.339 [2024-07-15 16:21:06.989966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.339 qpair failed and we were unable to recover it. 00:29:31.339 [2024-07-15 16:21:06.990391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.339 [2024-07-15 16:21:06.990401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.339 qpair failed and we were unable to recover it. 00:29:31.339 [2024-07-15 16:21:06.990660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.339 [2024-07-15 16:21:06.990669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.339 qpair failed and we were unable to recover it. 00:29:31.339 [2024-07-15 16:21:06.991085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.339 [2024-07-15 16:21:06.991094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.339 qpair failed and we were unable to recover it. 
00:29:31.339 [2024-07-15 16:21:06.991479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.339 [2024-07-15 16:21:06.991489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.339 qpair failed and we were unable to recover it. 00:29:31.339 [2024-07-15 16:21:06.991705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.339 [2024-07-15 16:21:06.991713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.339 qpair failed and we were unable to recover it. 00:29:31.339 [2024-07-15 16:21:06.992102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.339 [2024-07-15 16:21:06.992111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.339 qpair failed and we were unable to recover it. 00:29:31.339 [2024-07-15 16:21:06.992556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.339 [2024-07-15 16:21:06.992565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.339 qpair failed and we were unable to recover it. 00:29:31.339 [2024-07-15 16:21:06.992963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.339 [2024-07-15 16:21:06.992971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.339 qpair failed and we were unable to recover it. 00:29:31.339 [2024-07-15 16:21:06.993323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.339 [2024-07-15 16:21:06.993352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.339 qpair failed and we were unable to recover it. 00:29:31.339 [2024-07-15 16:21:06.993767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.339 [2024-07-15 16:21:06.993778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.339 qpair failed and we were unable to recover it. 00:29:31.339 [2024-07-15 16:21:06.994200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.339 [2024-07-15 16:21:06.994209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.339 qpair failed and we were unable to recover it. 00:29:31.339 [2024-07-15 16:21:06.994684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.339 [2024-07-15 16:21:06.994692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.339 qpair failed and we were unable to recover it. 00:29:31.339 [2024-07-15 16:21:06.995114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.339 [2024-07-15 16:21:06.995128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.339 qpair failed and we were unable to recover it. 
00:29:31.339 [2024-07-15 16:21:06.995574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.339 [2024-07-15 16:21:06.995583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.339 qpair failed and we were unable to recover it. 00:29:31.339 [2024-07-15 16:21:06.995995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.339 [2024-07-15 16:21:06.996004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.339 qpair failed and we were unable to recover it. 00:29:31.339 [2024-07-15 16:21:06.996370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.339 [2024-07-15 16:21:06.996399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.339 qpair failed and we were unable to recover it. 00:29:31.339 [2024-07-15 16:21:06.996792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.339 [2024-07-15 16:21:06.996803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.339 qpair failed and we were unable to recover it. 00:29:31.339 [2024-07-15 16:21:06.997018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.339 [2024-07-15 16:21:06.997027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.339 qpair failed and we were unable to recover it. 00:29:31.339 [2024-07-15 16:21:06.997450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.339 [2024-07-15 16:21:06.997459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.339 qpair failed and we were unable to recover it. 00:29:31.339 [2024-07-15 16:21:06.997900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.339 [2024-07-15 16:21:06.997909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.339 qpair failed and we were unable to recover it. 00:29:31.339 [2024-07-15 16:21:06.998403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.339 [2024-07-15 16:21:06.998433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.339 qpair failed and we were unable to recover it. 00:29:31.339 [2024-07-15 16:21:06.998836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.339 [2024-07-15 16:21:06.998846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.339 qpair failed and we were unable to recover it. 00:29:31.339 [2024-07-15 16:21:06.999267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.339 [2024-07-15 16:21:06.999276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.339 qpair failed and we were unable to recover it. 
00:29:31.339 [2024-07-15 16:21:06.999702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.339 [2024-07-15 16:21:06.999710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.339 qpair failed and we were unable to recover it. 00:29:31.339 [2024-07-15 16:21:07.000107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.339 [2024-07-15 16:21:07.000116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.339 qpair failed and we were unable to recover it. 00:29:31.339 [2024-07-15 16:21:07.000511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.340 [2024-07-15 16:21:07.000520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.340 qpair failed and we were unable to recover it. 00:29:31.340 [2024-07-15 16:21:07.000906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.340 [2024-07-15 16:21:07.000915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.340 qpair failed and we were unable to recover it. 00:29:31.340 [2024-07-15 16:21:07.001428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.340 [2024-07-15 16:21:07.001457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.340 qpair failed and we were unable to recover it. 00:29:31.340 [2024-07-15 16:21:07.001755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.340 [2024-07-15 16:21:07.001765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.340 qpair failed and we were unable to recover it. 00:29:31.340 [2024-07-15 16:21:07.002171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.340 [2024-07-15 16:21:07.002181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.340 qpair failed and we were unable to recover it. 00:29:31.340 [2024-07-15 16:21:07.002619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.340 [2024-07-15 16:21:07.002627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.340 qpair failed and we were unable to recover it. 00:29:31.340 [2024-07-15 16:21:07.003021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.340 [2024-07-15 16:21:07.003030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.340 qpair failed and we were unable to recover it. 00:29:31.340 [2024-07-15 16:21:07.003450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.340 [2024-07-15 16:21:07.003459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.340 qpair failed and we were unable to recover it. 
00:29:31.340 [2024-07-15 16:21:07.003851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.340 [2024-07-15 16:21:07.003860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.340 qpair failed and we were unable to recover it. 00:29:31.340 [2024-07-15 16:21:07.004274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.340 [2024-07-15 16:21:07.004283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.340 qpair failed and we were unable to recover it. 00:29:31.340 [2024-07-15 16:21:07.004686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.340 [2024-07-15 16:21:07.004694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.340 qpair failed and we were unable to recover it. 00:29:31.340 [2024-07-15 16:21:07.005099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.340 [2024-07-15 16:21:07.005108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.340 qpair failed and we were unable to recover it. 00:29:31.340 [2024-07-15 16:21:07.005494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.340 [2024-07-15 16:21:07.005504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.340 qpair failed and we were unable to recover it. 00:29:31.340 [2024-07-15 16:21:07.005915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.340 [2024-07-15 16:21:07.005924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.340 qpair failed and we were unable to recover it. 00:29:31.340 [2024-07-15 16:21:07.006458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.340 [2024-07-15 16:21:07.006487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.340 qpair failed and we were unable to recover it. 00:29:31.340 [2024-07-15 16:21:07.006935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.340 [2024-07-15 16:21:07.006945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.340 qpair failed and we were unable to recover it. 00:29:31.340 [2024-07-15 16:21:07.007450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.340 [2024-07-15 16:21:07.007479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.340 qpair failed and we were unable to recover it. 00:29:31.340 [2024-07-15 16:21:07.007903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.340 [2024-07-15 16:21:07.007913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.340 qpair failed and we were unable to recover it. 
00:29:31.340 [2024-07-15 16:21:07.008457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.340 [2024-07-15 16:21:07.008490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.340 qpair failed and we were unable to recover it. 00:29:31.340 [2024-07-15 16:21:07.008892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.340 [2024-07-15 16:21:07.008902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.340 qpair failed and we were unable to recover it. 00:29:31.340 [2024-07-15 16:21:07.009406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.340 [2024-07-15 16:21:07.009435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.340 qpair failed and we were unable to recover it. 00:29:31.340 [2024-07-15 16:21:07.009856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.340 [2024-07-15 16:21:07.009865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.340 qpair failed and we were unable to recover it. 00:29:31.340 [2024-07-15 16:21:07.010360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.340 [2024-07-15 16:21:07.010389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.340 qpair failed and we were unable to recover it. 00:29:31.340 [2024-07-15 16:21:07.010613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.340 [2024-07-15 16:21:07.010623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.340 qpair failed and we were unable to recover it. 00:29:31.340 [2024-07-15 16:21:07.011033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.340 [2024-07-15 16:21:07.011042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.340 qpair failed and we were unable to recover it. 00:29:31.340 [2024-07-15 16:21:07.011399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.340 [2024-07-15 16:21:07.011409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.340 qpair failed and we were unable to recover it. 00:29:31.340 [2024-07-15 16:21:07.011661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.340 [2024-07-15 16:21:07.011669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.340 qpair failed and we were unable to recover it. 00:29:31.340 [2024-07-15 16:21:07.012067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.340 [2024-07-15 16:21:07.012075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.340 qpair failed and we were unable to recover it. 
00:29:31.340 [2024-07-15 16:21:07.012463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.340 [2024-07-15 16:21:07.012472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.340 qpair failed and we were unable to recover it. 00:29:31.340 [2024-07-15 16:21:07.012892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.340 [2024-07-15 16:21:07.012901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.340 qpair failed and we were unable to recover it. 00:29:31.340 [2024-07-15 16:21:07.013168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.340 [2024-07-15 16:21:07.013177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.340 qpair failed and we were unable to recover it. 00:29:31.340 [2024-07-15 16:21:07.013587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.340 [2024-07-15 16:21:07.013595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.340 qpair failed and we were unable to recover it. 00:29:31.340 [2024-07-15 16:21:07.014003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.340 [2024-07-15 16:21:07.014013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.340 qpair failed and we were unable to recover it. 00:29:31.340 [2024-07-15 16:21:07.014399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.340 [2024-07-15 16:21:07.014408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.340 qpair failed and we were unable to recover it. 00:29:31.340 [2024-07-15 16:21:07.014806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.340 [2024-07-15 16:21:07.014815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.340 qpair failed and we were unable to recover it. 00:29:31.340 [2024-07-15 16:21:07.015222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.340 [2024-07-15 16:21:07.015230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.340 qpair failed and we were unable to recover it. 00:29:31.340 [2024-07-15 16:21:07.015632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.340 [2024-07-15 16:21:07.015640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.340 qpair failed and we were unable to recover it. 00:29:31.340 [2024-07-15 16:21:07.016056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.340 [2024-07-15 16:21:07.016065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.340 qpair failed and we were unable to recover it. 
00:29:31.340 [2024-07-15 16:21:07.016460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.340 [2024-07-15 16:21:07.016469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.340 qpair failed and we were unable to recover it. 00:29:31.340 [2024-07-15 16:21:07.016867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.340 [2024-07-15 16:21:07.016876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.340 qpair failed and we were unable to recover it. 00:29:31.340 [2024-07-15 16:21:07.017273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.340 [2024-07-15 16:21:07.017281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.340 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-15 16:21:07.017656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-15 16:21:07.017664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-15 16:21:07.018081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-15 16:21:07.018090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-15 16:21:07.018486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-15 16:21:07.018496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-15 16:21:07.018892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-15 16:21:07.018902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-15 16:21:07.019323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-15 16:21:07.019332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-15 16:21:07.019728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-15 16:21:07.019736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-15 16:21:07.019942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-15 16:21:07.019952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 
00:29:31.341 [2024-07-15 16:21:07.020163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-15 16:21:07.020172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-15 16:21:07.020599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-15 16:21:07.020608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-15 16:21:07.021006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-15 16:21:07.021015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-15 16:21:07.021431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-15 16:21:07.021440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-15 16:21:07.021881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-15 16:21:07.021890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-15 16:21:07.022095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-15 16:21:07.022105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-15 16:21:07.022493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-15 16:21:07.022502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-15 16:21:07.022917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-15 16:21:07.022926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-15 16:21:07.023444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-15 16:21:07.023473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-15 16:21:07.023896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-15 16:21:07.023906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 
00:29:31.341 [2024-07-15 16:21:07.024356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-15 16:21:07.024389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-15 16:21:07.024794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-15 16:21:07.024804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-15 16:21:07.025209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-15 16:21:07.025218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-15 16:21:07.025605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-15 16:21:07.025613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-15 16:21:07.026091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-15 16:21:07.026100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-15 16:21:07.026496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-15 16:21:07.026504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-15 16:21:07.026898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-15 16:21:07.026908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-15 16:21:07.027117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-15 16:21:07.027130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-15 16:21:07.027310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-15 16:21:07.027318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-15 16:21:07.027721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-15 16:21:07.027729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 
00:29:31.341 [2024-07-15 16:21:07.028125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-15 16:21:07.028134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-15 16:21:07.028381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-15 16:21:07.028389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-15 16:21:07.028650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-15 16:21:07.028658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-15 16:21:07.029082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-15 16:21:07.029091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-15 16:21:07.029317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-15 16:21:07.029326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-15 16:21:07.029725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-15 16:21:07.029734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-15 16:21:07.030111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-15 16:21:07.030120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-15 16:21:07.030534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-15 16:21:07.030543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-15 16:21:07.030938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-15 16:21:07.030947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-15 16:21:07.031346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-15 16:21:07.031375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 
00:29:31.341 [2024-07-15 16:21:07.031861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-15 16:21:07.031871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-15 16:21:07.032095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-15 16:21:07.032103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-15 16:21:07.032510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-15 16:21:07.032519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-15 16:21:07.032943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-15 16:21:07.032952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-15 16:21:07.033337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-15 16:21:07.033366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-15 16:21:07.033770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-15 16:21:07.033780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-15 16:21:07.034005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-15 16:21:07.034014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-15 16:21:07.034411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-15 16:21:07.034420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-15 16:21:07.034814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-15 16:21:07.034823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-15 16:21:07.035219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-15 16:21:07.035228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 
00:29:31.342 [2024-07-15 16:21:07.035630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-15 16:21:07.035638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-15 16:21:07.036093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-15 16:21:07.036101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-15 16:21:07.036505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-15 16:21:07.036514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-15 16:21:07.036739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-15 16:21:07.036747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-15 16:21:07.037151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-15 16:21:07.037160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-15 16:21:07.037549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-15 16:21:07.037558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-15 16:21:07.037954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-15 16:21:07.037962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-15 16:21:07.038143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-15 16:21:07.038154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-15 16:21:07.038511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-15 16:21:07.038521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-15 16:21:07.038979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-15 16:21:07.038988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 
00:29:31.342 [2024-07-15 16:21:07.039482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-15 16:21:07.039513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-15 16:21:07.039720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-15 16:21:07.039729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-15 16:21:07.039990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-15 16:21:07.039998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-15 16:21:07.040201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-15 16:21:07.040210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-15 16:21:07.040587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-15 16:21:07.040596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-15 16:21:07.040999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-15 16:21:07.041007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-15 16:21:07.041426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-15 16:21:07.041434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-15 16:21:07.041643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-15 16:21:07.041652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-15 16:21:07.042061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-15 16:21:07.042069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-15 16:21:07.042467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-15 16:21:07.042475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 
00:29:31.342 [2024-07-15 16:21:07.042877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-15 16:21:07.042885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-15 16:21:07.043095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-15 16:21:07.043103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-15 16:21:07.043288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-15 16:21:07.043298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-15 16:21:07.043746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-15 16:21:07.043755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-15 16:21:07.043965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-15 16:21:07.043974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-15 16:21:07.044342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-15 16:21:07.044350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-15 16:21:07.044764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-15 16:21:07.044772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-15 16:21:07.045169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-15 16:21:07.045177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-15 16:21:07.045570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-15 16:21:07.045579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-15 16:21:07.045975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-15 16:21:07.045985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 
00:29:31.342 [2024-07-15 16:21:07.046431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-15 16:21:07.046439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-15 16:21:07.046859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-15 16:21:07.046867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-15 16:21:07.047379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-15 16:21:07.047408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-15 16:21:07.047811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-15 16:21:07.047820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-15 16:21:07.048147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.343 [2024-07-15 16:21:07.048156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.343 qpair failed and we were unable to recover it. 00:29:31.343 [2024-07-15 16:21:07.048566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.343 [2024-07-15 16:21:07.048574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.343 qpair failed and we were unable to recover it. 00:29:31.343 [2024-07-15 16:21:07.048981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.343 [2024-07-15 16:21:07.048989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.343 qpair failed and we were unable to recover it. 00:29:31.343 [2024-07-15 16:21:07.049379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.343 [2024-07-15 16:21:07.049387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.343 qpair failed and we were unable to recover it. 00:29:31.343 [2024-07-15 16:21:07.049842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.343 [2024-07-15 16:21:07.049851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.343 qpair failed and we were unable to recover it. 00:29:31.343 [2024-07-15 16:21:07.050109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.343 [2024-07-15 16:21:07.050117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.343 qpair failed and we were unable to recover it. 
00:29:31.343 [2024-07-15 16:21:07.050537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.343 [2024-07-15 16:21:07.050545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.343 qpair failed and we were unable to recover it. 00:29:31.343 [2024-07-15 16:21:07.050940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.343 [2024-07-15 16:21:07.050950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.343 qpair failed and we were unable to recover it. 00:29:31.343 [2024-07-15 16:21:07.051462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.343 [2024-07-15 16:21:07.051491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.343 qpair failed and we were unable to recover it. 00:29:31.343 [2024-07-15 16:21:07.051724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.343 [2024-07-15 16:21:07.051733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.343 qpair failed and we were unable to recover it. 00:29:31.343 [2024-07-15 16:21:07.051961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.343 [2024-07-15 16:21:07.051969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.343 qpair failed and we were unable to recover it. 00:29:31.343 [2024-07-15 16:21:07.052389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.343 [2024-07-15 16:21:07.052398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.343 qpair failed and we were unable to recover it. 00:29:31.343 [2024-07-15 16:21:07.052822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.343 [2024-07-15 16:21:07.052830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.343 qpair failed and we were unable to recover it. 00:29:31.343 [2024-07-15 16:21:07.053317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.343 [2024-07-15 16:21:07.053346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.343 qpair failed and we were unable to recover it. 00:29:31.343 [2024-07-15 16:21:07.053754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.343 [2024-07-15 16:21:07.053764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.343 qpair failed and we were unable to recover it. 00:29:31.343 [2024-07-15 16:21:07.054188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.343 [2024-07-15 16:21:07.054196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.343 qpair failed and we were unable to recover it. 
00:29:31.343 [2024-07-15 16:21:07.054584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.343 [2024-07-15 16:21:07.054595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.343 qpair failed and we were unable to recover it. 00:29:31.343 [2024-07-15 16:21:07.055000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.343 [2024-07-15 16:21:07.055008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.343 qpair failed and we were unable to recover it. 00:29:31.343 [2024-07-15 16:21:07.055429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.343 [2024-07-15 16:21:07.055438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.343 qpair failed and we were unable to recover it. 00:29:31.343 [2024-07-15 16:21:07.055660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.344 [2024-07-15 16:21:07.055669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.344 qpair failed and we were unable to recover it. 00:29:31.344 [2024-07-15 16:21:07.055895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.344 [2024-07-15 16:21:07.055904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.344 qpair failed and we were unable to recover it. 00:29:31.344 [2024-07-15 16:21:07.056276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.344 [2024-07-15 16:21:07.056286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.344 qpair failed and we were unable to recover it. 00:29:31.344 [2024-07-15 16:21:07.056512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.344 [2024-07-15 16:21:07.056521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.344 qpair failed and we were unable to recover it. 00:29:31.344 [2024-07-15 16:21:07.056920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.344 [2024-07-15 16:21:07.056928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.344 qpair failed and we were unable to recover it. 00:29:31.344 [2024-07-15 16:21:07.057212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.344 [2024-07-15 16:21:07.057220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.344 qpair failed and we were unable to recover it. 00:29:31.344 [2024-07-15 16:21:07.057428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.344 [2024-07-15 16:21:07.057436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.344 qpair failed and we were unable to recover it. 
00:29:31.344 [2024-07-15 16:21:07.057631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.344 [2024-07-15 16:21:07.057641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.344 qpair failed and we were unable to recover it. 00:29:31.344 [2024-07-15 16:21:07.057998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.344 [2024-07-15 16:21:07.058007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.344 qpair failed and we were unable to recover it. 00:29:31.344 [2024-07-15 16:21:07.058394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.344 [2024-07-15 16:21:07.058403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.344 qpair failed and we were unable to recover it. 00:29:31.344 [2024-07-15 16:21:07.058802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.344 [2024-07-15 16:21:07.058810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.344 qpair failed and we were unable to recover it. 00:29:31.344 [2024-07-15 16:21:07.059003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.344 [2024-07-15 16:21:07.059011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.344 qpair failed and we were unable to recover it. 00:29:31.344 [2024-07-15 16:21:07.059407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.344 [2024-07-15 16:21:07.059415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.344 qpair failed and we were unable to recover it. 00:29:31.344 [2024-07-15 16:21:07.059833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.344 [2024-07-15 16:21:07.059842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.344 qpair failed and we were unable to recover it. 00:29:31.344 [2024-07-15 16:21:07.060235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.344 [2024-07-15 16:21:07.060243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.344 qpair failed and we were unable to recover it. 00:29:31.344 [2024-07-15 16:21:07.060539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.344 [2024-07-15 16:21:07.060548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.344 qpair failed and we were unable to recover it. 00:29:31.344 [2024-07-15 16:21:07.060947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.344 [2024-07-15 16:21:07.060955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.344 qpair failed and we were unable to recover it. 
00:29:31.344 [2024-07-15 16:21:07.061161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.344 [2024-07-15 16:21:07.061169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.344 qpair failed and we were unable to recover it. 00:29:31.345 [2024-07-15 16:21:07.061564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.345 [2024-07-15 16:21:07.061572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.345 qpair failed and we were unable to recover it. 00:29:31.345 [2024-07-15 16:21:07.061962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.345 [2024-07-15 16:21:07.061970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.345 qpair failed and we were unable to recover it. 00:29:31.345 [2024-07-15 16:21:07.062240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.345 [2024-07-15 16:21:07.062249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.345 qpair failed and we were unable to recover it. 00:29:31.345 [2024-07-15 16:21:07.062432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.345 [2024-07-15 16:21:07.062440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.345 qpair failed and we were unable to recover it. 00:29:31.345 [2024-07-15 16:21:07.062687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.345 [2024-07-15 16:21:07.062695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.345 qpair failed and we were unable to recover it. 00:29:31.345 [2024-07-15 16:21:07.063082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.345 [2024-07-15 16:21:07.063091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.345 qpair failed and we were unable to recover it. 00:29:31.345 [2024-07-15 16:21:07.063485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.345 [2024-07-15 16:21:07.063493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.345 qpair failed and we were unable to recover it. 00:29:31.345 [2024-07-15 16:21:07.063887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.345 [2024-07-15 16:21:07.063895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.345 qpair failed and we were unable to recover it. 00:29:31.345 [2024-07-15 16:21:07.064297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.345 [2024-07-15 16:21:07.064305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.345 qpair failed and we were unable to recover it. 
00:29:31.345 [2024-07-15 16:21:07.064701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.345 [2024-07-15 16:21:07.064709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.345 qpair failed and we were unable to recover it. 00:29:31.345 [2024-07-15 16:21:07.065107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.345 [2024-07-15 16:21:07.065115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.345 qpair failed and we were unable to recover it. 00:29:31.345 [2024-07-15 16:21:07.065533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.345 [2024-07-15 16:21:07.065541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.345 qpair failed and we were unable to recover it. 00:29:31.345 [2024-07-15 16:21:07.065746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.345 [2024-07-15 16:21:07.065753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.345 qpair failed and we were unable to recover it. 00:29:31.345 [2024-07-15 16:21:07.066008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.345 [2024-07-15 16:21:07.066015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.345 qpair failed and we were unable to recover it. 00:29:31.345 [2024-07-15 16:21:07.066413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.345 [2024-07-15 16:21:07.066422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.345 qpair failed and we were unable to recover it. 00:29:31.345 [2024-07-15 16:21:07.066832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.345 [2024-07-15 16:21:07.066840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.345 qpair failed and we were unable to recover it. 00:29:31.345 [2024-07-15 16:21:07.067280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.345 [2024-07-15 16:21:07.067289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.345 qpair failed and we were unable to recover it. 00:29:31.345 [2024-07-15 16:21:07.067676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.345 [2024-07-15 16:21:07.067684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.345 qpair failed and we were unable to recover it. 00:29:31.345 [2024-07-15 16:21:07.068080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.345 [2024-07-15 16:21:07.068088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.345 qpair failed and we were unable to recover it. 
00:29:31.345 [2024-07-15 16:21:07.068502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.345 [2024-07-15 16:21:07.068511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.345 qpair failed and we were unable to recover it. 00:29:31.345 [2024-07-15 16:21:07.068927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.345 [2024-07-15 16:21:07.068935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.345 qpair failed and we were unable to recover it. 00:29:31.345 [2024-07-15 16:21:07.069330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.345 [2024-07-15 16:21:07.069339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.345 qpair failed and we were unable to recover it. 00:29:31.345 [2024-07-15 16:21:07.069743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.345 [2024-07-15 16:21:07.069751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.345 qpair failed and we were unable to recover it. 00:29:31.345 [2024-07-15 16:21:07.070206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.345 [2024-07-15 16:21:07.070214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.345 qpair failed and we were unable to recover it. 00:29:31.345 [2024-07-15 16:21:07.070559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.345 [2024-07-15 16:21:07.070568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.345 qpair failed and we were unable to recover it. 00:29:31.345 [2024-07-15 16:21:07.070790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.345 [2024-07-15 16:21:07.070799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.345 qpair failed and we were unable to recover it. 00:29:31.345 [2024-07-15 16:21:07.071249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.345 [2024-07-15 16:21:07.071258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.345 qpair failed and we were unable to recover it. 00:29:31.345 [2024-07-15 16:21:07.071540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.345 [2024-07-15 16:21:07.071548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.345 qpair failed and we were unable to recover it. 00:29:31.345 [2024-07-15 16:21:07.071881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.345 [2024-07-15 16:21:07.071889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.345 qpair failed and we were unable to recover it. 
00:29:31.345 [2024-07-15 16:21:07.072275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.345 [2024-07-15 16:21:07.072283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.345 qpair failed and we were unable to recover it. 00:29:31.345 [2024-07-15 16:21:07.072693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.345 [2024-07-15 16:21:07.072701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.345 qpair failed and we were unable to recover it. 00:29:31.345 [2024-07-15 16:21:07.073118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.345 [2024-07-15 16:21:07.073129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.345 qpair failed and we were unable to recover it. 00:29:31.345 [2024-07-15 16:21:07.073538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.345 [2024-07-15 16:21:07.073548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.345 qpair failed and we were unable to recover it. 00:29:31.345 [2024-07-15 16:21:07.073944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.345 [2024-07-15 16:21:07.073952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.345 qpair failed and we were unable to recover it. 00:29:31.345 [2024-07-15 16:21:07.074369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.345 [2024-07-15 16:21:07.074398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.345 qpair failed and we were unable to recover it. 00:29:31.345 [2024-07-15 16:21:07.074823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.345 [2024-07-15 16:21:07.074833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.345 qpair failed and we were unable to recover it. 00:29:31.345 [2024-07-15 16:21:07.075240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.345 [2024-07-15 16:21:07.075248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.345 qpair failed and we were unable to recover it. 00:29:31.345 [2024-07-15 16:21:07.075527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.345 [2024-07-15 16:21:07.075535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.345 qpair failed and we were unable to recover it. 00:29:31.345 [2024-07-15 16:21:07.075931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.345 [2024-07-15 16:21:07.075939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.345 qpair failed and we were unable to recover it. 
00:29:31.345 [2024-07-15 16:21:07.076212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.345 [2024-07-15 16:21:07.076220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.345 qpair failed and we were unable to recover it. 00:29:31.345 [2024-07-15 16:21:07.076545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.345 [2024-07-15 16:21:07.076553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.346 qpair failed and we were unable to recover it. 00:29:31.346 16:21:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:31.346 [2024-07-15 16:21:07.076779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.346 [2024-07-15 16:21:07.076787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.346 qpair failed and we were unable to recover it. 00:29:31.346 16:21:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:29:31.346 [2024-07-15 16:21:07.077208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.346 [2024-07-15 16:21:07.077217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.346 qpair failed and we were unable to recover it. 00:29:31.346 16:21:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:31.346 [2024-07-15 16:21:07.077489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.346 [2024-07-15 16:21:07.077498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.346 qpair failed and we were unable to recover it. 00:29:31.346 16:21:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:31.346 16:21:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:31.346 [2024-07-15 16:21:07.077931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.346 [2024-07-15 16:21:07.077940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.346 qpair failed and we were unable to recover it. 00:29:31.346 [2024-07-15 16:21:07.078342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.346 [2024-07-15 16:21:07.078351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.346 qpair failed and we were unable to recover it. 00:29:31.346 [2024-07-15 16:21:07.078574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.346 [2024-07-15 16:21:07.078583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.346 qpair failed and we were unable to recover it. 
00:29:31.346 [2024-07-15 16:21:07.078994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.346 [2024-07-15 16:21:07.079002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.346 qpair failed and we were unable to recover it. 00:29:31.346 [2024-07-15 16:21:07.079405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.346 [2024-07-15 16:21:07.079413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.346 qpair failed and we were unable to recover it. 00:29:31.346 [2024-07-15 16:21:07.079816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.346 [2024-07-15 16:21:07.079825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.346 qpair failed and we were unable to recover it. 00:29:31.346 [2024-07-15 16:21:07.080192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.346 [2024-07-15 16:21:07.080200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.346 qpair failed and we were unable to recover it. 00:29:31.346 [2024-07-15 16:21:07.080490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.346 [2024-07-15 16:21:07.080498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.346 qpair failed and we were unable to recover it. 00:29:31.346 [2024-07-15 16:21:07.080714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.346 [2024-07-15 16:21:07.080722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.346 qpair failed and we were unable to recover it. 00:29:31.346 [2024-07-15 16:21:07.081132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.346 [2024-07-15 16:21:07.081140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.346 qpair failed and we were unable to recover it. 00:29:31.346 [2024-07-15 16:21:07.081538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.346 [2024-07-15 16:21:07.081549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.346 qpair failed and we were unable to recover it. 00:29:31.346 [2024-07-15 16:21:07.081974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.346 [2024-07-15 16:21:07.081984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.346 qpair failed and we were unable to recover it. 00:29:31.346 [2024-07-15 16:21:07.082375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.346 [2024-07-15 16:21:07.082384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.346 qpair failed and we were unable to recover it. 
00:29:31.346 [2024-07-15 16:21:07.082785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.346 [2024-07-15 16:21:07.082799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.346 qpair failed and we were unable to recover it. 00:29:31.346 [2024-07-15 16:21:07.083216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.346 [2024-07-15 16:21:07.083225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.346 qpair failed and we were unable to recover it. 00:29:31.346 [2024-07-15 16:21:07.083656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.346 [2024-07-15 16:21:07.083664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.346 qpair failed and we were unable to recover it. 00:29:31.346 [2024-07-15 16:21:07.083923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.346 [2024-07-15 16:21:07.083931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.346 qpair failed and we were unable to recover it. 00:29:31.346 [2024-07-15 16:21:07.084189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.346 [2024-07-15 16:21:07.084197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.346 qpair failed and we were unable to recover it. 00:29:31.346 [2024-07-15 16:21:07.084625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.346 [2024-07-15 16:21:07.084633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.346 qpair failed and we were unable to recover it. 00:29:31.346 [2024-07-15 16:21:07.085048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.346 [2024-07-15 16:21:07.085057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.346 qpair failed and we were unable to recover it. 00:29:31.346 [2024-07-15 16:21:07.085361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.346 [2024-07-15 16:21:07.085369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.346 qpair failed and we were unable to recover it. 00:29:31.346 [2024-07-15 16:21:07.085656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.346 [2024-07-15 16:21:07.085664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.346 qpair failed and we were unable to recover it. 00:29:31.346 [2024-07-15 16:21:07.086083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.346 [2024-07-15 16:21:07.086092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.346 qpair failed and we were unable to recover it. 
00:29:31.346 [2024-07-15 16:21:07.086569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.346 [2024-07-15 16:21:07.086579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.346 qpair failed and we were unable to recover it. 00:29:31.346 [2024-07-15 16:21:07.087014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.346 [2024-07-15 16:21:07.087023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.346 qpair failed and we were unable to recover it. 00:29:31.346 [2024-07-15 16:21:07.087426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.346 [2024-07-15 16:21:07.087435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.346 qpair failed and we were unable to recover it. 00:29:31.346 [2024-07-15 16:21:07.087852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.346 [2024-07-15 16:21:07.087860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.346 qpair failed and we were unable to recover it. 00:29:31.346 [2024-07-15 16:21:07.088172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.346 [2024-07-15 16:21:07.088182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.346 qpair failed and we were unable to recover it. 00:29:31.346 [2024-07-15 16:21:07.088583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.346 [2024-07-15 16:21:07.088591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.346 qpair failed and we were unable to recover it. 00:29:31.346 [2024-07-15 16:21:07.088849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.346 [2024-07-15 16:21:07.088858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.346 qpair failed and we were unable to recover it. 00:29:31.346 [2024-07-15 16:21:07.089292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.346 [2024-07-15 16:21:07.089301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.346 qpair failed and we were unable to recover it. 00:29:31.346 [2024-07-15 16:21:07.089719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.346 [2024-07-15 16:21:07.089729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.346 qpair failed and we were unable to recover it. 00:29:31.346 [2024-07-15 16:21:07.090169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.346 [2024-07-15 16:21:07.090178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.346 qpair failed and we were unable to recover it. 
00:29:31.346 [2024-07-15 16:21:07.090580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.346 [2024-07-15 16:21:07.090588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.346 qpair failed and we were unable to recover it. 00:29:31.346 [2024-07-15 16:21:07.091005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.346 [2024-07-15 16:21:07.091013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.346 qpair failed and we were unable to recover it. 00:29:31.346 [2024-07-15 16:21:07.091416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.347 [2024-07-15 16:21:07.091425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.347 qpair failed and we were unable to recover it. 00:29:31.347 [2024-07-15 16:21:07.091686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.347 [2024-07-15 16:21:07.091694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.347 qpair failed and we were unable to recover it. 00:29:31.347 [2024-07-15 16:21:07.092053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.347 [2024-07-15 16:21:07.092062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.347 qpair failed and we were unable to recover it. 00:29:31.347 [2024-07-15 16:21:07.092457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.347 [2024-07-15 16:21:07.092466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.347 qpair failed and we were unable to recover it. 00:29:31.347 [2024-07-15 16:21:07.092882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.347 [2024-07-15 16:21:07.092891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.347 qpair failed and we were unable to recover it. 00:29:31.347 [2024-07-15 16:21:07.093253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.347 [2024-07-15 16:21:07.093262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.347 qpair failed and we were unable to recover it. 00:29:31.347 [2024-07-15 16:21:07.093698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.347 [2024-07-15 16:21:07.093706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.347 qpair failed and we were unable to recover it. 00:29:31.347 [2024-07-15 16:21:07.094006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.347 [2024-07-15 16:21:07.094015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.347 qpair failed and we were unable to recover it. 
00:29:31.347 [2024-07-15 16:21:07.094283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.347 [2024-07-15 16:21:07.094291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.347 qpair failed and we were unable to recover it. 00:29:31.347 [2024-07-15 16:21:07.094688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.347 [2024-07-15 16:21:07.094696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.347 qpair failed and we were unable to recover it. 00:29:31.347 [2024-07-15 16:21:07.095112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.347 [2024-07-15 16:21:07.095120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.347 qpair failed and we were unable to recover it. 00:29:31.347 [2024-07-15 16:21:07.095470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.347 [2024-07-15 16:21:07.095478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.347 qpair failed and we were unable to recover it. 00:29:31.347 [2024-07-15 16:21:07.095732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.347 [2024-07-15 16:21:07.095741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.347 qpair failed and we were unable to recover it. 00:29:31.347 [2024-07-15 16:21:07.096130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.347 [2024-07-15 16:21:07.096139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.347 qpair failed and we were unable to recover it. 00:29:31.347 [2024-07-15 16:21:07.096511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.347 [2024-07-15 16:21:07.096520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.347 qpair failed and we were unable to recover it. 00:29:31.347 [2024-07-15 16:21:07.096940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.347 [2024-07-15 16:21:07.096948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.347 qpair failed and we were unable to recover it. 00:29:31.347 [2024-07-15 16:21:07.097461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.347 [2024-07-15 16:21:07.097491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.347 qpair failed and we were unable to recover it. 00:29:31.347 [2024-07-15 16:21:07.097899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.347 [2024-07-15 16:21:07.097909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.347 qpair failed and we were unable to recover it. 
00:29:31.347 [2024-07-15 16:21:07.098367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.347 [2024-07-15 16:21:07.098399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.347 qpair failed and we were unable to recover it. 00:29:31.347 [2024-07-15 16:21:07.098794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.347 [2024-07-15 16:21:07.098803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.347 qpair failed and we were unable to recover it. 00:29:31.347 [2024-07-15 16:21:07.099028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.347 [2024-07-15 16:21:07.099036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.347 qpair failed and we were unable to recover it. 00:29:31.347 [2024-07-15 16:21:07.099412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.347 [2024-07-15 16:21:07.099421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.347 qpair failed and we were unable to recover it. 00:29:31.347 [2024-07-15 16:21:07.099645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.347 [2024-07-15 16:21:07.099652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.347 qpair failed and we were unable to recover it. 00:29:31.347 [2024-07-15 16:21:07.099921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.347 [2024-07-15 16:21:07.099930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.347 qpair failed and we were unable to recover it. 00:29:31.347 [2024-07-15 16:21:07.100330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.347 [2024-07-15 16:21:07.100338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.347 qpair failed and we were unable to recover it. 00:29:31.347 [2024-07-15 16:21:07.100735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.347 [2024-07-15 16:21:07.100744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.347 qpair failed and we were unable to recover it. 00:29:31.347 [2024-07-15 16:21:07.101138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.347 [2024-07-15 16:21:07.101148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.347 qpair failed and we were unable to recover it. 00:29:31.347 [2024-07-15 16:21:07.101342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.347 [2024-07-15 16:21:07.101349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.347 qpair failed and we were unable to recover it. 
00:29:31.347 [2024-07-15 16:21:07.101754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.347 [2024-07-15 16:21:07.101763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.347 qpair failed and we were unable to recover it. 00:29:31.347 [2024-07-15 16:21:07.102156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.347 [2024-07-15 16:21:07.102164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.347 qpair failed and we were unable to recover it. 00:29:31.347 [2024-07-15 16:21:07.102561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.347 [2024-07-15 16:21:07.102570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.347 qpair failed and we were unable to recover it. 00:29:31.347 [2024-07-15 16:21:07.102831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.347 [2024-07-15 16:21:07.102839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.347 qpair failed and we were unable to recover it. 00:29:31.347 [2024-07-15 16:21:07.103266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.347 [2024-07-15 16:21:07.103275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.347 qpair failed and we were unable to recover it. 00:29:31.347 [2024-07-15 16:21:07.103679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.347 [2024-07-15 16:21:07.103687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.347 qpair failed and we were unable to recover it. 00:29:31.347 [2024-07-15 16:21:07.104080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.347 [2024-07-15 16:21:07.104088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.347 qpair failed and we were unable to recover it. 00:29:31.347 [2024-07-15 16:21:07.104389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.347 [2024-07-15 16:21:07.104398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.347 qpair failed and we were unable to recover it. 00:29:31.347 [2024-07-15 16:21:07.104781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.347 [2024-07-15 16:21:07.104789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.347 qpair failed and we were unable to recover it. 00:29:31.347 [2024-07-15 16:21:07.105181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.347 [2024-07-15 16:21:07.105189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.347 qpair failed and we were unable to recover it. 
00:29:31.347 [2024-07-15 16:21:07.105589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.347 [2024-07-15 16:21:07.105597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.347 qpair failed and we were unable to recover it. 00:29:31.347 [2024-07-15 16:21:07.105999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.347 [2024-07-15 16:21:07.106007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.347 qpair failed and we were unable to recover it. 00:29:31.347 [2024-07-15 16:21:07.106216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.347 [2024-07-15 16:21:07.106223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.347 qpair failed and we were unable to recover it. 00:29:31.348 [2024-07-15 16:21:07.106499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.348 [2024-07-15 16:21:07.106508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.348 qpair failed and we were unable to recover it. 00:29:31.348 [2024-07-15 16:21:07.106902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.348 [2024-07-15 16:21:07.106911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.348 qpair failed and we were unable to recover it. 00:29:31.348 [2024-07-15 16:21:07.107115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.348 [2024-07-15 16:21:07.107129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.348 qpair failed and we were unable to recover it. 00:29:31.348 [2024-07-15 16:21:07.107525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.348 [2024-07-15 16:21:07.107533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.348 qpair failed and we were unable to recover it. 00:29:31.348 [2024-07-15 16:21:07.107929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.348 [2024-07-15 16:21:07.107938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.348 qpair failed and we were unable to recover it. 00:29:31.348 [2024-07-15 16:21:07.108180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.348 [2024-07-15 16:21:07.108188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.348 qpair failed and we were unable to recover it. 00:29:31.348 [2024-07-15 16:21:07.108633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.348 [2024-07-15 16:21:07.108642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.348 qpair failed and we were unable to recover it. 
00:29:31.348 [2024-07-15 16:21:07.109032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.348 [2024-07-15 16:21:07.109041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.348 qpair failed and we were unable to recover it. 00:29:31.348 [2024-07-15 16:21:07.109438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.348 [2024-07-15 16:21:07.109446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.348 qpair failed and we were unable to recover it. 00:29:31.348 [2024-07-15 16:21:07.109864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.348 [2024-07-15 16:21:07.109872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.348 qpair failed and we were unable to recover it. 00:29:31.348 [2024-07-15 16:21:07.110267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.348 [2024-07-15 16:21:07.110276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.348 qpair failed and we were unable to recover it. 00:29:31.348 [2024-07-15 16:21:07.110686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.348 [2024-07-15 16:21:07.110695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.348 qpair failed and we were unable to recover it. 00:29:31.348 [2024-07-15 16:21:07.111079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.348 [2024-07-15 16:21:07.111087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.348 qpair failed and we were unable to recover it. 00:29:31.348 [2024-07-15 16:21:07.111486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.348 [2024-07-15 16:21:07.111495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.348 qpair failed and we were unable to recover it. 00:29:31.348 [2024-07-15 16:21:07.111931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.348 [2024-07-15 16:21:07.111939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.348 qpair failed and we were unable to recover it. 00:29:31.348 [2024-07-15 16:21:07.112449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.348 [2024-07-15 16:21:07.112479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.348 qpair failed and we were unable to recover it. 00:29:31.348 [2024-07-15 16:21:07.112889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.348 [2024-07-15 16:21:07.112900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.348 qpair failed and we were unable to recover it. 
00:29:31.348 [2024-07-15 16:21:07.113496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.348 [2024-07-15 16:21:07.113528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.348 qpair failed and we were unable to recover it. 00:29:31.348 [2024-07-15 16:21:07.113937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.348 [2024-07-15 16:21:07.113947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.348 qpair failed and we were unable to recover it. 00:29:31.348 [2024-07-15 16:21:07.114369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.348 [2024-07-15 16:21:07.114378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.348 qpair failed and we were unable to recover it. 00:29:31.348 [2024-07-15 16:21:07.114775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.348 [2024-07-15 16:21:07.114785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.348 qpair failed and we were unable to recover it. 00:29:31.348 [2024-07-15 16:21:07.115335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.348 [2024-07-15 16:21:07.115364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.348 qpair failed and we were unable to recover it. 00:29:31.348 16:21:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:31.348 [2024-07-15 16:21:07.115774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.348 [2024-07-15 16:21:07.115785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.348 qpair failed and we were unable to recover it. 00:29:31.348 16:21:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:31.348 [2024-07-15 16:21:07.116204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.348 [2024-07-15 16:21:07.116214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.348 qpair failed and we were unable to recover it. 00:29:31.348 16:21:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.348 16:21:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:31.348 [2024-07-15 16:21:07.116606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.348 [2024-07-15 16:21:07.116617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.348 qpair failed and we were unable to recover it. 
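The xtrace line host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 is the test creating its backing bdev on the target. rpc_cmd is the autotest wrapper around scripts/rpc.py, so the equivalent direct invocation against a running target would look roughly like the sketch below (assuming the default RPC socket):

  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  # 64 MB malloc bdev with a 512-byte block size, named Malloc0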
00:29:31.348 [2024-07-15 16:21:07.117014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.348 [2024-07-15 16:21:07.117023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.348 qpair failed and we were unable to recover it. 00:29:31.348 [2024-07-15 16:21:07.117436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.348 [2024-07-15 16:21:07.117445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.348 qpair failed and we were unable to recover it. 00:29:31.348 [2024-07-15 16:21:07.117864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.348 [2024-07-15 16:21:07.117873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.348 qpair failed and we were unable to recover it. 00:29:31.348 [2024-07-15 16:21:07.118295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.348 [2024-07-15 16:21:07.118304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.348 qpair failed and we were unable to recover it. 00:29:31.348 [2024-07-15 16:21:07.118729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.348 [2024-07-15 16:21:07.118739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.348 qpair failed and we were unable to recover it. 00:29:31.348 [2024-07-15 16:21:07.119155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.348 [2024-07-15 16:21:07.119171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.348 qpair failed and we were unable to recover it. 00:29:31.348 [2024-07-15 16:21:07.119559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.348 [2024-07-15 16:21:07.119568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.348 qpair failed and we were unable to recover it. 00:29:31.348 [2024-07-15 16:21:07.119967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.348 [2024-07-15 16:21:07.119976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.348 qpair failed and we were unable to recover it. 00:29:31.348 [2024-07-15 16:21:07.120198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.349 [2024-07-15 16:21:07.120206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.349 qpair failed and we were unable to recover it. 00:29:31.349 [2024-07-15 16:21:07.120589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.349 [2024-07-15 16:21:07.120597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.349 qpair failed and we were unable to recover it. 
00:29:31.349 [2024-07-15 16:21:07.120854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.349 [2024-07-15 16:21:07.120863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.349 qpair failed and we were unable to recover it. 00:29:31.349 [2024-07-15 16:21:07.121258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.349 [2024-07-15 16:21:07.121266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.349 qpair failed and we were unable to recover it. 00:29:31.349 [2024-07-15 16:21:07.121532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.349 [2024-07-15 16:21:07.121540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.349 qpair failed and we were unable to recover it. 00:29:31.349 [2024-07-15 16:21:07.121925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.349 [2024-07-15 16:21:07.121933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.349 qpair failed and we were unable to recover it. 00:29:31.349 [2024-07-15 16:21:07.122352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.349 [2024-07-15 16:21:07.122360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.349 qpair failed and we were unable to recover it. 00:29:31.349 [2024-07-15 16:21:07.122651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.349 [2024-07-15 16:21:07.122660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.349 qpair failed and we were unable to recover it. 00:29:31.349 [2024-07-15 16:21:07.123056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.349 [2024-07-15 16:21:07.123064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.349 qpair failed and we were unable to recover it. 00:29:31.349 [2024-07-15 16:21:07.123467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.349 [2024-07-15 16:21:07.123477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.349 qpair failed and we were unable to recover it. 00:29:31.349 [2024-07-15 16:21:07.123857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.349 [2024-07-15 16:21:07.123865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.349 qpair failed and we were unable to recover it. 00:29:31.349 [2024-07-15 16:21:07.124262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.349 [2024-07-15 16:21:07.124270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.349 qpair failed and we were unable to recover it. 
00:29:31.349 [2024-07-15 16:21:07.124669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.349 [2024-07-15 16:21:07.124677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.349 qpair failed and we were unable to recover it. 00:29:31.349 [2024-07-15 16:21:07.125116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.349 [2024-07-15 16:21:07.125135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.349 qpair failed and we were unable to recover it. 00:29:31.349 [2024-07-15 16:21:07.125510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.349 [2024-07-15 16:21:07.125518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.349 qpair failed and we were unable to recover it. 00:29:31.349 [2024-07-15 16:21:07.125918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.349 [2024-07-15 16:21:07.125927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.349 qpair failed and we were unable to recover it. 00:29:31.349 [2024-07-15 16:21:07.126454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.349 [2024-07-15 16:21:07.126484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.349 qpair failed and we were unable to recover it. 00:29:31.349 [2024-07-15 16:21:07.126889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.349 [2024-07-15 16:21:07.126899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.349 qpair failed and we were unable to recover it. 00:29:31.349 [2024-07-15 16:21:07.127427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.349 [2024-07-15 16:21:07.127456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.349 qpair failed and we were unable to recover it. 00:29:31.349 [2024-07-15 16:21:07.127757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.349 [2024-07-15 16:21:07.127768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.349 qpair failed and we were unable to recover it. 00:29:31.349 [2024-07-15 16:21:07.128166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.349 [2024-07-15 16:21:07.128174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.349 qpair failed and we were unable to recover it. 00:29:31.349 [2024-07-15 16:21:07.128564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.349 [2024-07-15 16:21:07.128572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.349 qpair failed and we were unable to recover it. 
00:29:31.349 [2024-07-15 16:21:07.128943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.349 [2024-07-15 16:21:07.128951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.349 qpair failed and we were unable to recover it. 00:29:31.349 [2024-07-15 16:21:07.129329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.349 [2024-07-15 16:21:07.129338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.349 qpair failed and we were unable to recover it. 00:29:31.349 [2024-07-15 16:21:07.129746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.349 [2024-07-15 16:21:07.129754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.349 qpair failed and we were unable to recover it. 00:29:31.349 [2024-07-15 16:21:07.129961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.349 [2024-07-15 16:21:07.129968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.349 qpair failed and we were unable to recover it. 00:29:31.349 [2024-07-15 16:21:07.130263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.349 [2024-07-15 16:21:07.130272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.349 qpair failed and we were unable to recover it. 00:29:31.349 [2024-07-15 16:21:07.130681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.349 [2024-07-15 16:21:07.130689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.349 qpair failed and we were unable to recover it. 00:29:31.349 [2024-07-15 16:21:07.131084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.349 [2024-07-15 16:21:07.131092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.349 qpair failed and we were unable to recover it. 00:29:31.349 [2024-07-15 16:21:07.131514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.349 [2024-07-15 16:21:07.131522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.349 qpair failed and we were unable to recover it. 00:29:31.349 [2024-07-15 16:21:07.131956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.349 [2024-07-15 16:21:07.131964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.349 qpair failed and we were unable to recover it. 00:29:31.349 Malloc0 00:29:31.349 [2024-07-15 16:21:07.132367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.349 [2024-07-15 16:21:07.132396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.349 qpair failed and we were unable to recover it. 
00:29:31.349 [2024-07-15 16:21:07.132807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.349 [2024-07-15 16:21:07.132817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.349 qpair failed and we were unable to recover it. 00:29:31.349 16:21:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.349 16:21:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:31.349 [2024-07-15 16:21:07.133336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.349 [2024-07-15 16:21:07.133366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.349 qpair failed and we were unable to recover it. 00:29:31.349 16:21:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.349 [2024-07-15 16:21:07.133598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.349 [2024-07-15 16:21:07.133608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.349 qpair failed and we were unable to recover it. 00:29:31.349 16:21:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:31.349 [2024-07-15 16:21:07.133885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.349 [2024-07-15 16:21:07.133896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.349 qpair failed and we were unable to recover it. 00:29:31.349 [2024-07-15 16:21:07.134324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.349 [2024-07-15 16:21:07.134333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.349 qpair failed and we were unable to recover it. 00:29:31.349 [2024-07-15 16:21:07.134770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.349 [2024-07-15 16:21:07.134779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.349 qpair failed and we were unable to recover it. 00:29:31.349 [2024-07-15 16:21:07.135004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.349 [2024-07-15 16:21:07.135012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.349 qpair failed and we were unable to recover it. 00:29:31.349 [2024-07-15 16:21:07.135432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.350 [2024-07-15 16:21:07.135441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.350 qpair failed and we were unable to recover it. 
00:29:31.350 [2024-07-15 16:21:07.135656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.350 [2024-07-15 16:21:07.135666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.350 qpair failed and we were unable to recover it. 00:29:31.350 [2024-07-15 16:21:07.135783] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:31.350 [2024-07-15 16:21:07.136081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.350 [2024-07-15 16:21:07.136089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.350 qpair failed and we were unable to recover it. 00:29:31.350 [2024-07-15 16:21:07.136495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.350 [2024-07-15 16:21:07.136504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.350 qpair failed and we were unable to recover it. 00:29:31.350 [2024-07-15 16:21:07.136764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.350 [2024-07-15 16:21:07.136772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.350 qpair failed and we were unable to recover it. 00:29:31.350 [2024-07-15 16:21:07.136996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.350 [2024-07-15 16:21:07.137005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.350 qpair failed and we were unable to recover it. 00:29:31.350 [2024-07-15 16:21:07.137411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.350 [2024-07-15 16:21:07.137420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.350 qpair failed and we were unable to recover it. 00:29:31.350 [2024-07-15 16:21:07.137678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.350 [2024-07-15 16:21:07.137686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.350 qpair failed and we were unable to recover it. 00:29:31.350 [2024-07-15 16:21:07.137914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.350 [2024-07-15 16:21:07.137924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.350 qpair failed and we were unable to recover it. 00:29:31.350 [2024-07-15 16:21:07.138134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.350 [2024-07-15 16:21:07.138142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.350 qpair failed and we were unable to recover it. 
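The rpc_cmd nvmf_create_transport -t tcp -o call traced above, followed by the *** TCP Transport Init *** notice from tcp.c, is the target instantiating its TCP transport. Run by hand it would be roughly the following sketch; -t selects the transport type, and the trailing -o is carried over verbatim from the test script (in current rpc.py it toggles the TCP C2H-success optimization, assuming this SPDK revision matches that option set):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o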
00:29:31.350 [2024-07-15 16:21:07.138547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.350 [2024-07-15 16:21:07.138556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.350 qpair failed and we were unable to recover it. 00:29:31.350 [2024-07-15 16:21:07.139023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.350 [2024-07-15 16:21:07.139031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.350 qpair failed and we were unable to recover it. 00:29:31.350 [2024-07-15 16:21:07.139315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.350 [2024-07-15 16:21:07.139323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.350 qpair failed and we were unable to recover it. 00:29:31.350 [2024-07-15 16:21:07.139556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.350 [2024-07-15 16:21:07.139564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.350 qpair failed and we were unable to recover it. 00:29:31.350 [2024-07-15 16:21:07.139959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.350 [2024-07-15 16:21:07.139968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.350 qpair failed and we were unable to recover it. 00:29:31.350 [2024-07-15 16:21:07.140412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.350 [2024-07-15 16:21:07.140421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.350 qpair failed and we were unable to recover it. 00:29:31.350 [2024-07-15 16:21:07.140847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.350 [2024-07-15 16:21:07.140855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.350 qpair failed and we were unable to recover it. 00:29:31.350 [2024-07-15 16:21:07.141246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.350 [2024-07-15 16:21:07.141255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.350 qpair failed and we were unable to recover it. 00:29:31.350 [2024-07-15 16:21:07.141458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.350 [2024-07-15 16:21:07.141465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.350 qpair failed and we were unable to recover it. 00:29:31.350 [2024-07-15 16:21:07.141818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.350 [2024-07-15 16:21:07.141826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.350 qpair failed and we were unable to recover it. 
00:29:31.350 [2024-07-15 16:21:07.142243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.350 [2024-07-15 16:21:07.142251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.350 qpair failed and we were unable to recover it. 00:29:31.350 [2024-07-15 16:21:07.142530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.350 [2024-07-15 16:21:07.142538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.350 qpair failed and we were unable to recover it. 00:29:31.350 [2024-07-15 16:21:07.142936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.350 [2024-07-15 16:21:07.142943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.350 qpair failed and we were unable to recover it. 00:29:31.350 [2024-07-15 16:21:07.143250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.350 [2024-07-15 16:21:07.143260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.350 qpair failed and we were unable to recover it. 00:29:31.350 [2024-07-15 16:21:07.143690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.350 [2024-07-15 16:21:07.143698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.350 qpair failed and we were unable to recover it. 00:29:31.350 [2024-07-15 16:21:07.144103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.350 [2024-07-15 16:21:07.144111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.350 qpair failed and we were unable to recover it. 00:29:31.350 [2024-07-15 16:21:07.144522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.350 [2024-07-15 16:21:07.144531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.350 qpair failed and we were unable to recover it. 00:29:31.350 16:21:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.350 [2024-07-15 16:21:07.144796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.350 [2024-07-15 16:21:07.144805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.350 qpair failed and we were unable to recover it. 00:29:31.350 16:21:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:31.350 [2024-07-15 16:21:07.145021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.350 [2024-07-15 16:21:07.145031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.350 qpair failed and we were unable to recover it. 
00:29:31.350 [2024-07-15 16:21:07.145256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.350 [2024-07-15 16:21:07.145264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.350 qpair failed and we were unable to recover it. 00:29:31.350 16:21:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.350 [2024-07-15 16:21:07.145529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.350 [2024-07-15 16:21:07.145539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.350 qpair failed and we were unable to recover it. 00:29:31.350 16:21:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:31.350 [2024-07-15 16:21:07.145921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.350 [2024-07-15 16:21:07.145929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.350 qpair failed and we were unable to recover it. 00:29:31.350 [2024-07-15 16:21:07.146335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.350 [2024-07-15 16:21:07.146345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.350 qpair failed and we were unable to recover it. 00:29:31.350 [2024-07-15 16:21:07.146710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.350 [2024-07-15 16:21:07.146721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.350 qpair failed and we were unable to recover it. 00:29:31.350 [2024-07-15 16:21:07.147120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.350 [2024-07-15 16:21:07.147133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.350 qpair failed and we were unable to recover it. 00:29:31.350 [2024-07-15 16:21:07.147588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.350 [2024-07-15 16:21:07.147595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.350 qpair failed and we were unable to recover it. 00:29:31.350 [2024-07-15 16:21:07.148030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.350 [2024-07-15 16:21:07.148039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.350 qpair failed and we were unable to recover it. 00:29:31.350 [2024-07-15 16:21:07.148236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.350 [2024-07-15 16:21:07.148244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.350 qpair failed and we were unable to recover it. 
00:29:31.350 [2024-07-15 16:21:07.148431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.350 [2024-07-15 16:21:07.148439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.351 qpair failed and we were unable to recover it. 00:29:31.351 [2024-07-15 16:21:07.148643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.351 [2024-07-15 16:21:07.148650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.351 qpair failed and we were unable to recover it. 00:29:31.351 [2024-07-15 16:21:07.148832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.351 [2024-07-15 16:21:07.148842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.351 qpair failed and we were unable to recover it. 00:29:31.351 [2024-07-15 16:21:07.149251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.351 [2024-07-15 16:21:07.149260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.351 qpair failed and we were unable to recover it. 00:29:31.351 [2024-07-15 16:21:07.149659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.351 [2024-07-15 16:21:07.149667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.351 qpair failed and we were unable to recover it. 00:29:31.351 [2024-07-15 16:21:07.150059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.351 [2024-07-15 16:21:07.150067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.351 qpair failed and we were unable to recover it. 00:29:31.351 [2024-07-15 16:21:07.150462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.351 [2024-07-15 16:21:07.150471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.351 qpair failed and we were unable to recover it. 00:29:31.351 [2024-07-15 16:21:07.150887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.351 [2024-07-15 16:21:07.150896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.351 qpair failed and we were unable to recover it. 00:29:31.351 [2024-07-15 16:21:07.151295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.351 [2024-07-15 16:21:07.151304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.351 qpair failed and we were unable to recover it. 00:29:31.351 [2024-07-15 16:21:07.151580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.351 [2024-07-15 16:21:07.151588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.351 qpair failed and we were unable to recover it. 
00:29:31.351 [2024-07-15 16:21:07.152008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.351 [2024-07-15 16:21:07.152016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.351 qpair failed and we were unable to recover it. 00:29:31.351 [2024-07-15 16:21:07.152314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.351 [2024-07-15 16:21:07.152323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.351 qpair failed and we were unable to recover it. 00:29:31.351 [2024-07-15 16:21:07.152729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.351 [2024-07-15 16:21:07.152737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.351 qpair failed and we were unable to recover it. 00:29:31.351 16:21:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.351 [2024-07-15 16:21:07.152945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.351 [2024-07-15 16:21:07.152953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.351 qpair failed and we were unable to recover it. 00:29:31.351 16:21:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:31.351 [2024-07-15 16:21:07.153370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.351 [2024-07-15 16:21:07.153380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.351 qpair failed and we were unable to recover it. 00:29:31.351 16:21:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.351 [2024-07-15 16:21:07.153650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.351 [2024-07-15 16:21:07.153659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.351 qpair failed and we were unable to recover it. 00:29:31.351 16:21:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:31.351 [2024-07-15 16:21:07.153884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.351 [2024-07-15 16:21:07.153893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.351 qpair failed and we were unable to recover it. 00:29:31.351 [2024-07-15 16:21:07.154318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.351 [2024-07-15 16:21:07.154328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.351 qpair failed and we were unable to recover it. 
00:29:31.351 [2024-07-15 16:21:07.154550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.351 [2024-07-15 16:21:07.154559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.351 qpair failed and we were unable to recover it. 00:29:31.351 [2024-07-15 16:21:07.154896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.351 [2024-07-15 16:21:07.154904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.351 qpair failed and we were unable to recover it. 00:29:31.351 [2024-07-15 16:21:07.155236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.351 [2024-07-15 16:21:07.155249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.351 qpair failed and we were unable to recover it. 00:29:31.351 [2024-07-15 16:21:07.155666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.351 [2024-07-15 16:21:07.155674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.351 qpair failed and we were unable to recover it. 00:29:31.351 [2024-07-15 16:21:07.155957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.351 [2024-07-15 16:21:07.155965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.351 qpair failed and we were unable to recover it. 00:29:31.351 [2024-07-15 16:21:07.156153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.351 [2024-07-15 16:21:07.156163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.351 qpair failed and we were unable to recover it. 00:29:31.351 [2024-07-15 16:21:07.156576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.351 [2024-07-15 16:21:07.156585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.351 qpair failed and we were unable to recover it. 00:29:31.351 [2024-07-15 16:21:07.156989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.351 [2024-07-15 16:21:07.156996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.351 qpair failed and we were unable to recover it. 00:29:31.351 [2024-07-15 16:21:07.157390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.351 [2024-07-15 16:21:07.157398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.351 qpair failed and we were unable to recover it. 00:29:31.351 [2024-07-15 16:21:07.157783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.351 [2024-07-15 16:21:07.157791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.351 qpair failed and we were unable to recover it. 
00:29:31.351 [2024-07-15 16:21:07.158053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.351 [2024-07-15 16:21:07.158062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.351 qpair failed and we were unable to recover it. 00:29:31.351 [2024-07-15 16:21:07.158151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.351 [2024-07-15 16:21:07.158159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f724c000b90 with addr=10.0.0.2, port=4420 00:29:31.351 qpair failed and we were unable to recover it. 00:29:31.351 Read completed with error (sct=0, sc=8) 00:29:31.351 starting I/O failed 00:29:31.351 Read completed with error (sct=0, sc=8) 00:29:31.351 starting I/O failed 00:29:31.351 Read completed with error (sct=0, sc=8) 00:29:31.351 starting I/O failed 00:29:31.351 Read completed with error (sct=0, sc=8) 00:29:31.351 starting I/O failed 00:29:31.351 Read completed with error (sct=0, sc=8) 00:29:31.351 starting I/O failed 00:29:31.351 Read completed with error (sct=0, sc=8) 00:29:31.351 starting I/O failed 00:29:31.351 Read completed with error (sct=0, sc=8) 00:29:31.351 starting I/O failed 00:29:31.351 Read completed with error (sct=0, sc=8) 00:29:31.351 starting I/O failed 00:29:31.351 Read completed with error (sct=0, sc=8) 00:29:31.351 starting I/O failed 00:29:31.351 Read completed with error (sct=0, sc=8) 00:29:31.351 starting I/O failed 00:29:31.351 Read completed with error (sct=0, sc=8) 00:29:31.351 starting I/O failed 00:29:31.351 Write completed with error (sct=0, sc=8) 00:29:31.351 starting I/O failed 00:29:31.351 Read completed with error (sct=0, sc=8) 00:29:31.351 starting I/O failed 00:29:31.351 Write completed with error (sct=0, sc=8) 00:29:31.351 starting I/O failed 00:29:31.351 Write completed with error (sct=0, sc=8) 00:29:31.351 starting I/O failed 00:29:31.351 Read completed with error (sct=0, sc=8) 00:29:31.351 starting I/O failed 00:29:31.351 Write completed with error (sct=0, sc=8) 00:29:31.351 starting I/O failed 00:29:31.351 Write completed with error (sct=0, sc=8) 00:29:31.351 starting I/O failed 00:29:31.351 Write completed with error (sct=0, sc=8) 00:29:31.351 starting I/O failed 00:29:31.351 Write completed with error (sct=0, sc=8) 00:29:31.351 starting I/O failed 00:29:31.351 Read completed with error (sct=0, sc=8) 00:29:31.351 starting I/O failed 00:29:31.351 Write completed with error (sct=0, sc=8) 00:29:31.351 starting I/O failed 00:29:31.351 Read completed with error (sct=0, sc=8) 00:29:31.351 starting I/O failed 00:29:31.351 Read completed with error (sct=0, sc=8) 00:29:31.351 starting I/O failed 00:29:31.351 Read completed with error (sct=0, sc=8) 00:29:31.351 starting I/O failed 00:29:31.351 Write completed with error (sct=0, sc=8) 00:29:31.351 starting I/O failed 00:29:31.351 Read completed with error (sct=0, sc=8) 00:29:31.351 starting I/O failed 00:29:31.351 Write completed with error (sct=0, sc=8) 00:29:31.351 starting I/O failed 00:29:31.351 Write completed with error (sct=0, sc=8) 00:29:31.351 starting I/O failed 00:29:31.351 Read completed with error (sct=0, sc=8) 00:29:31.351 starting I/O failed 00:29:31.351 Write completed with error (sct=0, sc=8) 00:29:31.352 starting I/O failed 00:29:31.352 Read completed with error (sct=0, sc=8) 00:29:31.352 starting I/O failed 00:29:31.352 [2024-07-15 16:21:07.158885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: 
*ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.352 [2024-07-15 16:21:07.159254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.352 [2024-07-15 16:21:07.159300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7254000b90 with addr=10.0.0.2, port=4420 00:29:31.352 qpair failed and we were unable to recover it. 00:29:31.614 [2024-07-15 16:21:07.159903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.614 [2024-07-15 16:21:07.159991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7254000b90 with addr=10.0.0.2, port=4420 00:29:31.614 qpair failed and we were unable to recover it. 00:29:31.614 [2024-07-15 16:21:07.160565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.614 [2024-07-15 16:21:07.160606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7254000b90 with addr=10.0.0.2, port=4420 00:29:31.614 16:21:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.614 qpair failed and we were unable to recover it. 00:29:31.614 [2024-07-15 16:21:07.160887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.614 [2024-07-15 16:21:07.160918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7254000b90 with addr=10.0.0.2, port=4420 00:29:31.614 qpair failed and we were unable to recover it. 00:29:31.614 16:21:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:31.614 16:21:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.614 [2024-07-15 16:21:07.161381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.614 [2024-07-15 16:21:07.161414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7254000b90 with addr=10.0.0.2, port=4420 00:29:31.614 qpair failed and we were unable to recover it. 00:29:31.614 16:21:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:31.614 [2024-07-15 16:21:07.161857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.614 [2024-07-15 16:21:07.161887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7254000b90 with addr=10.0.0.2, port=4420 00:29:31.614 qpair failed and we were unable to recover it. 00:29:31.614 [2024-07-15 16:21:07.162344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.614 [2024-07-15 16:21:07.162380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7254000b90 with addr=10.0.0.2, port=4420 00:29:31.614 qpair failed and we were unable to recover it. 00:29:31.614 [2024-07-15 16:21:07.162690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.614 [2024-07-15 16:21:07.162721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7254000b90 with addr=10.0.0.2, port=4420 00:29:31.614 qpair failed and we were unable to recover it. 
00:29:31.614 [2024-07-15 16:21:07.162977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.614 [2024-07-15 16:21:07.163017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7254000b90 with addr=10.0.0.2, port=4420 00:29:31.614 qpair failed and we were unable to recover it. 00:29:31.614 [2024-07-15 16:21:07.163460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.614 [2024-07-15 16:21:07.163493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7254000b90 with addr=10.0.0.2, port=4420 00:29:31.614 qpair failed and we were unable to recover it. 00:29:31.614 [2024-07-15 16:21:07.163813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.614 [2024-07-15 16:21:07.163842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7254000b90 with addr=10.0.0.2, port=4420 00:29:31.614 qpair failed and we were unable to recover it. 00:29:31.614 [2024-07-15 16:21:07.164016] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:31.614 [2024-07-15 16:21:07.166466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.614 [2024-07-15 16:21:07.166600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.614 [2024-07-15 16:21:07.166648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.614 [2024-07-15 16:21:07.166670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.614 [2024-07-15 16:21:07.166689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.614 [2024-07-15 16:21:07.166737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.614 qpair failed and we were unable to recover it. 
00:29:31.614 16:21:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.614 16:21:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:31.614 16:21:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.614 16:21:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:31.614 [2024-07-15 16:21:07.176445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.615 [2024-07-15 16:21:07.176640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.615 [2024-07-15 16:21:07.176681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.615 [2024-07-15 16:21:07.176700] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.615 [2024-07-15 16:21:07.176717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.615 16:21:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.615 [2024-07-15 16:21:07.176756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.615 qpair failed and we were unable to recover it. 00:29:31.615 16:21:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2477026 00:29:31.615 [2024-07-15 16:21:07.186508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.615 [2024-07-15 16:21:07.186663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.615 [2024-07-15 16:21:07.186708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.615 [2024-07-15 16:21:07.186725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.615 [2024-07-15 16:21:07.186749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.615 [2024-07-15 16:21:07.186787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.615 qpair failed and we were unable to recover it. 
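The shell trace interleaved with the errors above shows the target side of this test being configured over RPC: nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1, nvmf_subsystem_add_ns with Malloc0, nvmf_subsystem_add_listener on 10.0.0.2 port 4420, and a discovery listener. For reference, a minimal hand-written sketch of that same sequence driven directly through scripts/rpc.py is shown below; it assumes a running nvmf_tgt on the default /var/tmp/spdk.sock RPC socket, and the transport and Malloc0 bdev lines are assumptions (those steps occur before this excerpt and their exact arguments are not visible here).

  # Sketch of the target-side RPC sequence traced above (not part of the test output).
  # Assumes a running nvmf_tgt and the default /var/tmp/spdk.sock RPC socket.
  scripts/rpc.py nvmf_create_transport -t tcp                    # assumed earlier step, not shown in this excerpt
  scripts/rpc.py bdev_malloc_create -b Malloc0 64 512            # assumed: size/block size are illustrative only
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420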
00:29:31.615 [2024-07-15 16:21:07.196304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.615 [2024-07-15 16:21:07.196415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.615 [2024-07-15 16:21:07.196439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.615 [2024-07-15 16:21:07.196449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.615 [2024-07-15 16:21:07.196459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.615 [2024-07-15 16:21:07.196481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.615 qpair failed and we were unable to recover it. 00:29:31.615 [2024-07-15 16:21:07.206374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.615 [2024-07-15 16:21:07.206463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.615 [2024-07-15 16:21:07.206480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.615 [2024-07-15 16:21:07.206488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.615 [2024-07-15 16:21:07.206495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.615 [2024-07-15 16:21:07.206510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.615 qpair failed and we were unable to recover it. 00:29:31.615 [2024-07-15 16:21:07.216398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.615 [2024-07-15 16:21:07.216483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.615 [2024-07-15 16:21:07.216500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.615 [2024-07-15 16:21:07.216508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.615 [2024-07-15 16:21:07.216514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.615 [2024-07-15 16:21:07.216530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.615 qpair failed and we were unable to recover it. 
00:29:31.615 [2024-07-15 16:21:07.226457] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.615 [2024-07-15 16:21:07.226571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.615 [2024-07-15 16:21:07.226589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.615 [2024-07-15 16:21:07.226597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.615 [2024-07-15 16:21:07.226604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.615 [2024-07-15 16:21:07.226620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.615 qpair failed and we were unable to recover it. 00:29:31.615 [2024-07-15 16:21:07.236398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.615 [2024-07-15 16:21:07.236582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.615 [2024-07-15 16:21:07.236600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.615 [2024-07-15 16:21:07.236607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.615 [2024-07-15 16:21:07.236613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.615 [2024-07-15 16:21:07.236630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.615 qpair failed and we were unable to recover it. 00:29:31.615 [2024-07-15 16:21:07.246498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.615 [2024-07-15 16:21:07.246587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.615 [2024-07-15 16:21:07.246604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.615 [2024-07-15 16:21:07.246612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.615 [2024-07-15 16:21:07.246618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.615 [2024-07-15 16:21:07.246634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.615 qpair failed and we were unable to recover it. 
00:29:31.615 [2024-07-15 16:21:07.256497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.615 [2024-07-15 16:21:07.256585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.615 [2024-07-15 16:21:07.256602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.615 [2024-07-15 16:21:07.256609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.615 [2024-07-15 16:21:07.256616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.615 [2024-07-15 16:21:07.256631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.615 qpair failed and we were unable to recover it. 00:29:31.615 [2024-07-15 16:21:07.266554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.615 [2024-07-15 16:21:07.266672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.615 [2024-07-15 16:21:07.266689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.615 [2024-07-15 16:21:07.266697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.615 [2024-07-15 16:21:07.266704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.615 [2024-07-15 16:21:07.266722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.615 qpair failed and we were unable to recover it. 00:29:31.615 [2024-07-15 16:21:07.276440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.615 [2024-07-15 16:21:07.276530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.615 [2024-07-15 16:21:07.276547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.615 [2024-07-15 16:21:07.276560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.615 [2024-07-15 16:21:07.276566] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.615 [2024-07-15 16:21:07.276582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.615 qpair failed and we were unable to recover it. 
00:29:31.615 [2024-07-15 16:21:07.286620] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.615 [2024-07-15 16:21:07.286715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.615 [2024-07-15 16:21:07.286732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.615 [2024-07-15 16:21:07.286740] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.615 [2024-07-15 16:21:07.286746] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.615 [2024-07-15 16:21:07.286762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.615 qpair failed and we were unable to recover it. 00:29:31.615 [2024-07-15 16:21:07.296603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.615 [2024-07-15 16:21:07.296707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.615 [2024-07-15 16:21:07.296725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.615 [2024-07-15 16:21:07.296733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.615 [2024-07-15 16:21:07.296739] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.615 [2024-07-15 16:21:07.296755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.615 qpair failed and we were unable to recover it. 00:29:31.615 [2024-07-15 16:21:07.306537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.615 [2024-07-15 16:21:07.306617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.615 [2024-07-15 16:21:07.306634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.615 [2024-07-15 16:21:07.306644] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.615 [2024-07-15 16:21:07.306650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.615 [2024-07-15 16:21:07.306666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.615 qpair failed and we were unable to recover it. 
00:29:31.615 [2024-07-15 16:21:07.316661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.616 [2024-07-15 16:21:07.316748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.616 [2024-07-15 16:21:07.316765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.616 [2024-07-15 16:21:07.316773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.616 [2024-07-15 16:21:07.316779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.616 [2024-07-15 16:21:07.316795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.616 qpair failed and we were unable to recover it. 00:29:31.616 [2024-07-15 16:21:07.326706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.616 [2024-07-15 16:21:07.326820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.616 [2024-07-15 16:21:07.326838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.616 [2024-07-15 16:21:07.326846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.616 [2024-07-15 16:21:07.326853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.616 [2024-07-15 16:21:07.326868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.616 qpair failed and we were unable to recover it. 00:29:31.616 [2024-07-15 16:21:07.336724] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.616 [2024-07-15 16:21:07.336818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.616 [2024-07-15 16:21:07.336838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.616 [2024-07-15 16:21:07.336846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.616 [2024-07-15 16:21:07.336853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.616 [2024-07-15 16:21:07.336870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.616 qpair failed and we were unable to recover it. 
00:29:31.616 [2024-07-15 16:21:07.346713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.616 [2024-07-15 16:21:07.346819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.616 [2024-07-15 16:21:07.346840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.616 [2024-07-15 16:21:07.346849] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.616 [2024-07-15 16:21:07.346856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.616 [2024-07-15 16:21:07.346872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.616 qpair failed and we were unable to recover it. 00:29:31.616 [2024-07-15 16:21:07.356784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.616 [2024-07-15 16:21:07.356887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.616 [2024-07-15 16:21:07.356918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.616 [2024-07-15 16:21:07.356927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.616 [2024-07-15 16:21:07.356935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.616 [2024-07-15 16:21:07.356956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.616 qpair failed and we were unable to recover it. 00:29:31.616 [2024-07-15 16:21:07.366887] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.616 [2024-07-15 16:21:07.367014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.616 [2024-07-15 16:21:07.367051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.616 [2024-07-15 16:21:07.367060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.616 [2024-07-15 16:21:07.367068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.616 [2024-07-15 16:21:07.367090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.616 qpair failed and we were unable to recover it. 
00:29:31.616 [2024-07-15 16:21:07.376776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.616 [2024-07-15 16:21:07.376875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.616 [2024-07-15 16:21:07.376898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.616 [2024-07-15 16:21:07.376907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.616 [2024-07-15 16:21:07.376915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.616 [2024-07-15 16:21:07.376935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.616 qpair failed and we were unable to recover it. 00:29:31.616 [2024-07-15 16:21:07.386892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.616 [2024-07-15 16:21:07.386992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.616 [2024-07-15 16:21:07.387014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.616 [2024-07-15 16:21:07.387022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.616 [2024-07-15 16:21:07.387029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.616 [2024-07-15 16:21:07.387048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.616 qpair failed and we were unable to recover it. 00:29:31.616 [2024-07-15 16:21:07.396911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.616 [2024-07-15 16:21:07.397003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.616 [2024-07-15 16:21:07.397025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.616 [2024-07-15 16:21:07.397034] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.616 [2024-07-15 16:21:07.397041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.616 [2024-07-15 16:21:07.397059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.616 qpair failed and we were unable to recover it. 
00:29:31.616 [2024-07-15 16:21:07.406967] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.616 [2024-07-15 16:21:07.407063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.616 [2024-07-15 16:21:07.407085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.616 [2024-07-15 16:21:07.407095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.616 [2024-07-15 16:21:07.407103] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.616 [2024-07-15 16:21:07.407139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.616 qpair failed and we were unable to recover it. 00:29:31.616 [2024-07-15 16:21:07.417075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.616 [2024-07-15 16:21:07.417179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.616 [2024-07-15 16:21:07.417203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.616 [2024-07-15 16:21:07.417211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.616 [2024-07-15 16:21:07.417218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.616 [2024-07-15 16:21:07.417236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.616 qpair failed and we were unable to recover it. 00:29:31.616 [2024-07-15 16:21:07.426984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.616 [2024-07-15 16:21:07.427087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.616 [2024-07-15 16:21:07.427111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.616 [2024-07-15 16:21:07.427120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.616 [2024-07-15 16:21:07.427134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.616 [2024-07-15 16:21:07.427153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.616 qpair failed and we were unable to recover it. 
00:29:31.616 [2024-07-15 16:21:07.437086] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.616 [2024-07-15 16:21:07.437189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.616 [2024-07-15 16:21:07.437214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.616 [2024-07-15 16:21:07.437224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.616 [2024-07-15 16:21:07.437231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.616 [2024-07-15 16:21:07.437250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.616 qpair failed and we were unable to recover it. 00:29:31.616 [2024-07-15 16:21:07.447200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.616 [2024-07-15 16:21:07.447304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.616 [2024-07-15 16:21:07.447329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.616 [2024-07-15 16:21:07.447340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.616 [2024-07-15 16:21:07.447347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.616 [2024-07-15 16:21:07.447369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.616 qpair failed and we were unable to recover it. 00:29:31.879 [2024-07-15 16:21:07.457148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.879 [2024-07-15 16:21:07.457257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.879 [2024-07-15 16:21:07.457289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.879 [2024-07-15 16:21:07.457298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.879 [2024-07-15 16:21:07.457305] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.879 [2024-07-15 16:21:07.457327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.879 qpair failed and we were unable to recover it. 
00:29:31.879 [2024-07-15 16:21:07.467130] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.879 [2024-07-15 16:21:07.467251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.879 [2024-07-15 16:21:07.467277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.879 [2024-07-15 16:21:07.467287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.879 [2024-07-15 16:21:07.467295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.879 [2024-07-15 16:21:07.467315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.879 qpair failed and we were unable to recover it. 00:29:31.879 [2024-07-15 16:21:07.477200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.879 [2024-07-15 16:21:07.477311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.879 [2024-07-15 16:21:07.477337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.879 [2024-07-15 16:21:07.477350] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.879 [2024-07-15 16:21:07.477357] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.879 [2024-07-15 16:21:07.477377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.879 qpair failed and we were unable to recover it. 00:29:31.879 [2024-07-15 16:21:07.487242] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.879 [2024-07-15 16:21:07.487349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.879 [2024-07-15 16:21:07.487375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.879 [2024-07-15 16:21:07.487384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.879 [2024-07-15 16:21:07.487393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.879 [2024-07-15 16:21:07.487413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.879 qpair failed and we were unable to recover it. 
00:29:31.879 [2024-07-15 16:21:07.497223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.879 [2024-07-15 16:21:07.497315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.879 [2024-07-15 16:21:07.497339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.879 [2024-07-15 16:21:07.497349] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.879 [2024-07-15 16:21:07.497357] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.879 [2024-07-15 16:21:07.497383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.879 qpair failed and we were unable to recover it. 00:29:31.879 [2024-07-15 16:21:07.507259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.879 [2024-07-15 16:21:07.507356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.879 [2024-07-15 16:21:07.507382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.879 [2024-07-15 16:21:07.507391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.879 [2024-07-15 16:21:07.507399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.879 [2024-07-15 16:21:07.507419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.879 qpair failed and we were unable to recover it. 00:29:31.879 [2024-07-15 16:21:07.517315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.879 [2024-07-15 16:21:07.517412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.879 [2024-07-15 16:21:07.517439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.879 [2024-07-15 16:21:07.517448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.879 [2024-07-15 16:21:07.517455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.879 [2024-07-15 16:21:07.517475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.879 qpair failed and we were unable to recover it. 
00:29:31.879 [2024-07-15 16:21:07.527342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.879 [2024-07-15 16:21:07.527450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.879 [2024-07-15 16:21:07.527477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.879 [2024-07-15 16:21:07.527487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.879 [2024-07-15 16:21:07.527494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.879 [2024-07-15 16:21:07.527515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.879 qpair failed and we were unable to recover it. 00:29:31.879 [2024-07-15 16:21:07.537356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.879 [2024-07-15 16:21:07.537453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.879 [2024-07-15 16:21:07.537481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.879 [2024-07-15 16:21:07.537493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.879 [2024-07-15 16:21:07.537502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.879 [2024-07-15 16:21:07.537525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.879 qpair failed and we were unable to recover it. 00:29:31.879 [2024-07-15 16:21:07.547412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.879 [2024-07-15 16:21:07.547509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.879 [2024-07-15 16:21:07.547543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.879 [2024-07-15 16:21:07.547552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.879 [2024-07-15 16:21:07.547559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.879 [2024-07-15 16:21:07.547580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.879 qpair failed and we were unable to recover it. 
00:29:31.879 [2024-07-15 16:21:07.557328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.879 [2024-07-15 16:21:07.557424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.879 [2024-07-15 16:21:07.557450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.879 [2024-07-15 16:21:07.557459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.879 [2024-07-15 16:21:07.557466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.879 [2024-07-15 16:21:07.557486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.879 qpair failed and we were unable to recover it. 00:29:31.879 [2024-07-15 16:21:07.567480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.879 [2024-07-15 16:21:07.567591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.879 [2024-07-15 16:21:07.567618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.879 [2024-07-15 16:21:07.567628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.879 [2024-07-15 16:21:07.567635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.879 [2024-07-15 16:21:07.567655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.879 qpair failed and we were unable to recover it. 00:29:31.879 [2024-07-15 16:21:07.577496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.879 [2024-07-15 16:21:07.577592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.879 [2024-07-15 16:21:07.577618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.879 [2024-07-15 16:21:07.577627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.879 [2024-07-15 16:21:07.577633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.879 [2024-07-15 16:21:07.577653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.879 qpair failed and we were unable to recover it. 
00:29:31.879 [2024-07-15 16:21:07.587488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.879 [2024-07-15 16:21:07.587612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.879 [2024-07-15 16:21:07.587638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.879 [2024-07-15 16:21:07.587646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.879 [2024-07-15 16:21:07.587659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.879 [2024-07-15 16:21:07.587679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.879 qpair failed and we were unable to recover it. 00:29:31.879 [2024-07-15 16:21:07.597626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.879 [2024-07-15 16:21:07.597719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.880 [2024-07-15 16:21:07.597745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.880 [2024-07-15 16:21:07.597755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.880 [2024-07-15 16:21:07.597763] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.880 [2024-07-15 16:21:07.597783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.880 qpair failed and we were unable to recover it. 00:29:31.880 [2024-07-15 16:21:07.607597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.880 [2024-07-15 16:21:07.607716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.880 [2024-07-15 16:21:07.607756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.880 [2024-07-15 16:21:07.607767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.880 [2024-07-15 16:21:07.607775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.880 [2024-07-15 16:21:07.607802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.880 qpair failed and we were unable to recover it. 
00:29:31.880 [2024-07-15 16:21:07.617637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.880 [2024-07-15 16:21:07.617753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.880 [2024-07-15 16:21:07.617792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.880 [2024-07-15 16:21:07.617803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.880 [2024-07-15 16:21:07.617810] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.880 [2024-07-15 16:21:07.617836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.880 qpair failed and we were unable to recover it. 00:29:31.880 [2024-07-15 16:21:07.627658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.880 [2024-07-15 16:21:07.627764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.880 [2024-07-15 16:21:07.627803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.880 [2024-07-15 16:21:07.627814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.880 [2024-07-15 16:21:07.627822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.880 [2024-07-15 16:21:07.627848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.880 qpair failed and we were unable to recover it. 00:29:31.880 [2024-07-15 16:21:07.637800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.880 [2024-07-15 16:21:07.637914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.880 [2024-07-15 16:21:07.637952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.880 [2024-07-15 16:21:07.637962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.880 [2024-07-15 16:21:07.637971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.880 [2024-07-15 16:21:07.637999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.880 qpair failed and we were unable to recover it. 
00:29:31.880 [2024-07-15 16:21:07.647615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.880 [2024-07-15 16:21:07.647738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.880 [2024-07-15 16:21:07.647768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.880 [2024-07-15 16:21:07.647778] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.880 [2024-07-15 16:21:07.647785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.880 [2024-07-15 16:21:07.647807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.880 qpair failed and we were unable to recover it. 00:29:31.880 [2024-07-15 16:21:07.657766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.880 [2024-07-15 16:21:07.657862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.880 [2024-07-15 16:21:07.657888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.880 [2024-07-15 16:21:07.657898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.880 [2024-07-15 16:21:07.657906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.880 [2024-07-15 16:21:07.657926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.880 qpair failed and we were unable to recover it. 00:29:31.880 [2024-07-15 16:21:07.667759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.880 [2024-07-15 16:21:07.667853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.880 [2024-07-15 16:21:07.667879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.880 [2024-07-15 16:21:07.667888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.880 [2024-07-15 16:21:07.667895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.880 [2024-07-15 16:21:07.667915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.880 qpair failed and we were unable to recover it. 
00:29:31.880 [2024-07-15 16:21:07.677806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.880 [2024-07-15 16:21:07.677903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.880 [2024-07-15 16:21:07.677928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.880 [2024-07-15 16:21:07.677944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.880 [2024-07-15 16:21:07.677952] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.880 [2024-07-15 16:21:07.677972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.880 qpair failed and we were unable to recover it. 00:29:31.880 [2024-07-15 16:21:07.687827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.880 [2024-07-15 16:21:07.687940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.880 [2024-07-15 16:21:07.687965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.880 [2024-07-15 16:21:07.687974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.880 [2024-07-15 16:21:07.687981] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.880 [2024-07-15 16:21:07.688001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.880 qpair failed and we were unable to recover it. 00:29:31.880 [2024-07-15 16:21:07.697827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.880 [2024-07-15 16:21:07.697936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.880 [2024-07-15 16:21:07.697962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.880 [2024-07-15 16:21:07.697971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.880 [2024-07-15 16:21:07.697978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.880 [2024-07-15 16:21:07.697999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.880 qpair failed and we were unable to recover it. 
00:29:31.880 [2024-07-15 16:21:07.707908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.880 [2024-07-15 16:21:07.708017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.880 [2024-07-15 16:21:07.708043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.880 [2024-07-15 16:21:07.708053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.880 [2024-07-15 16:21:07.708059] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.880 [2024-07-15 16:21:07.708079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.880 qpair failed and we were unable to recover it. 00:29:31.880 [2024-07-15 16:21:07.717941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.880 [2024-07-15 16:21:07.718038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.880 [2024-07-15 16:21:07.718063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.880 [2024-07-15 16:21:07.718072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.880 [2024-07-15 16:21:07.718080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:31.880 [2024-07-15 16:21:07.718099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.880 qpair failed and we were unable to recover it. 00:29:32.143 [2024-07-15 16:21:07.727984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.143 [2024-07-15 16:21:07.728093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.143 [2024-07-15 16:21:07.728118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.143 [2024-07-15 16:21:07.728134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.143 [2024-07-15 16:21:07.728142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.143 [2024-07-15 16:21:07.728163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.143 qpair failed and we were unable to recover it. 
00:29:32.143 [2024-07-15 16:21:07.738005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.143 [2024-07-15 16:21:07.738158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.143 [2024-07-15 16:21:07.738183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.143 [2024-07-15 16:21:07.738192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.143 [2024-07-15 16:21:07.738199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.143 [2024-07-15 16:21:07.738219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.143 qpair failed and we were unable to recover it. 00:29:32.143 [2024-07-15 16:21:07.748017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.143 [2024-07-15 16:21:07.748129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.143 [2024-07-15 16:21:07.748155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.143 [2024-07-15 16:21:07.748164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.143 [2024-07-15 16:21:07.748171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.143 [2024-07-15 16:21:07.748194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.143 qpair failed and we were unable to recover it. 00:29:32.143 [2024-07-15 16:21:07.758091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.143 [2024-07-15 16:21:07.758207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.143 [2024-07-15 16:21:07.758236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.143 [2024-07-15 16:21:07.758245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.143 [2024-07-15 16:21:07.758252] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.143 [2024-07-15 16:21:07.758273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.143 qpair failed and we were unable to recover it. 
00:29:32.143 [2024-07-15 16:21:07.768079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.143 [2024-07-15 16:21:07.768196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.143 [2024-07-15 16:21:07.768223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.143 [2024-07-15 16:21:07.768238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.143 [2024-07-15 16:21:07.768245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.143 [2024-07-15 16:21:07.768265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.143 qpair failed and we were unable to recover it. 00:29:32.143 [2024-07-15 16:21:07.778117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.143 [2024-07-15 16:21:07.778212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.143 [2024-07-15 16:21:07.778238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.143 [2024-07-15 16:21:07.778248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.143 [2024-07-15 16:21:07.778255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.143 [2024-07-15 16:21:07.778275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.143 qpair failed and we were unable to recover it. 00:29:32.143 [2024-07-15 16:21:07.788130] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.143 [2024-07-15 16:21:07.788228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.143 [2024-07-15 16:21:07.788254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.143 [2024-07-15 16:21:07.788264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.143 [2024-07-15 16:21:07.788271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.143 [2024-07-15 16:21:07.788291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.143 qpair failed and we were unable to recover it. 
00:29:32.143 [2024-07-15 16:21:07.798197] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.143 [2024-07-15 16:21:07.798295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.143 [2024-07-15 16:21:07.798320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.143 [2024-07-15 16:21:07.798328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.143 [2024-07-15 16:21:07.798336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.143 [2024-07-15 16:21:07.798356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.143 qpair failed and we were unable to recover it. 00:29:32.143 [2024-07-15 16:21:07.808202] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.143 [2024-07-15 16:21:07.808310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.144 [2024-07-15 16:21:07.808335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.144 [2024-07-15 16:21:07.808344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.144 [2024-07-15 16:21:07.808351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.144 [2024-07-15 16:21:07.808372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.144 qpair failed and we were unable to recover it. 00:29:32.144 [2024-07-15 16:21:07.818245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.144 [2024-07-15 16:21:07.818436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.144 [2024-07-15 16:21:07.818461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.144 [2024-07-15 16:21:07.818470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.144 [2024-07-15 16:21:07.818477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.144 [2024-07-15 16:21:07.818497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.144 qpair failed and we were unable to recover it. 
00:29:32.144 [2024-07-15 16:21:07.828269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.144 [2024-07-15 16:21:07.828384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.144 [2024-07-15 16:21:07.828411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.144 [2024-07-15 16:21:07.828420] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.144 [2024-07-15 16:21:07.828427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.144 [2024-07-15 16:21:07.828447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.144 qpair failed and we were unable to recover it. 00:29:32.144 [2024-07-15 16:21:07.838294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.144 [2024-07-15 16:21:07.838392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.144 [2024-07-15 16:21:07.838418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.144 [2024-07-15 16:21:07.838426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.144 [2024-07-15 16:21:07.838433] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.144 [2024-07-15 16:21:07.838453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.144 qpair failed and we were unable to recover it. 00:29:32.144 [2024-07-15 16:21:07.848322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.144 [2024-07-15 16:21:07.848423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.144 [2024-07-15 16:21:07.848450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.144 [2024-07-15 16:21:07.848459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.144 [2024-07-15 16:21:07.848467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.144 [2024-07-15 16:21:07.848486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.144 qpair failed and we were unable to recover it. 
00:29:32.144 [2024-07-15 16:21:07.858358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.144 [2024-07-15 16:21:07.858454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.144 [2024-07-15 16:21:07.858485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.144 [2024-07-15 16:21:07.858494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.144 [2024-07-15 16:21:07.858501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.144 [2024-07-15 16:21:07.858521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.144 qpair failed and we were unable to recover it. 00:29:32.144 [2024-07-15 16:21:07.868437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.144 [2024-07-15 16:21:07.868526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.144 [2024-07-15 16:21:07.868552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.144 [2024-07-15 16:21:07.868562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.144 [2024-07-15 16:21:07.868569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.144 [2024-07-15 16:21:07.868589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.144 qpair failed and we were unable to recover it. 00:29:32.144 [2024-07-15 16:21:07.878478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.144 [2024-07-15 16:21:07.878577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.144 [2024-07-15 16:21:07.878603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.144 [2024-07-15 16:21:07.878612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.144 [2024-07-15 16:21:07.878619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.144 [2024-07-15 16:21:07.878639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.144 qpair failed and we were unable to recover it. 
00:29:32.144 [2024-07-15 16:21:07.888488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.144 [2024-07-15 16:21:07.888594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.144 [2024-07-15 16:21:07.888620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.144 [2024-07-15 16:21:07.888629] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.144 [2024-07-15 16:21:07.888636] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.144 [2024-07-15 16:21:07.888655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.144 qpair failed and we were unable to recover it. 00:29:32.144 [2024-07-15 16:21:07.898465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.144 [2024-07-15 16:21:07.898565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.144 [2024-07-15 16:21:07.898591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.144 [2024-07-15 16:21:07.898600] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.144 [2024-07-15 16:21:07.898606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.144 [2024-07-15 16:21:07.898631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.144 qpair failed and we were unable to recover it. 00:29:32.144 [2024-07-15 16:21:07.908477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.144 [2024-07-15 16:21:07.908570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.144 [2024-07-15 16:21:07.908595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.144 [2024-07-15 16:21:07.908605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.144 [2024-07-15 16:21:07.908612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.144 [2024-07-15 16:21:07.908631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.144 qpair failed and we were unable to recover it. 
00:29:32.144 [2024-07-15 16:21:07.918559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.144 [2024-07-15 16:21:07.918655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.144 [2024-07-15 16:21:07.918681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.144 [2024-07-15 16:21:07.918689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.144 [2024-07-15 16:21:07.918696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.144 [2024-07-15 16:21:07.918715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.144 qpair failed and we were unable to recover it. 00:29:32.144 [2024-07-15 16:21:07.928506] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.144 [2024-07-15 16:21:07.928662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.144 [2024-07-15 16:21:07.928687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.144 [2024-07-15 16:21:07.928696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.144 [2024-07-15 16:21:07.928703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.144 [2024-07-15 16:21:07.928724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.144 qpair failed and we were unable to recover it. 00:29:32.144 [2024-07-15 16:21:07.938585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.144 [2024-07-15 16:21:07.938708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.144 [2024-07-15 16:21:07.938734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.144 [2024-07-15 16:21:07.938742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.144 [2024-07-15 16:21:07.938750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.144 [2024-07-15 16:21:07.938769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.144 qpair failed and we were unable to recover it. 
00:29:32.144 [2024-07-15 16:21:07.948665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.144 [2024-07-15 16:21:07.948771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.144 [2024-07-15 16:21:07.948816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.145 [2024-07-15 16:21:07.948828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.145 [2024-07-15 16:21:07.948836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.145 [2024-07-15 16:21:07.948864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.145 qpair failed and we were unable to recover it. 00:29:32.145 [2024-07-15 16:21:07.958665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.145 [2024-07-15 16:21:07.958780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.145 [2024-07-15 16:21:07.958818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.145 [2024-07-15 16:21:07.958829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.145 [2024-07-15 16:21:07.958837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.145 [2024-07-15 16:21:07.958864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.145 qpair failed and we were unable to recover it. 00:29:32.145 [2024-07-15 16:21:07.968601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.145 [2024-07-15 16:21:07.968710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.145 [2024-07-15 16:21:07.968741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.145 [2024-07-15 16:21:07.968751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.145 [2024-07-15 16:21:07.968758] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.145 [2024-07-15 16:21:07.968783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.145 qpair failed and we were unable to recover it. 
00:29:32.145 [2024-07-15 16:21:07.978725] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.145 [2024-07-15 16:21:07.978827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.145 [2024-07-15 16:21:07.978867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.145 [2024-07-15 16:21:07.978878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.145 [2024-07-15 16:21:07.978886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.145 [2024-07-15 16:21:07.978911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.145 qpair failed and we were unable to recover it. 00:29:32.406 [2024-07-15 16:21:07.988750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.406 [2024-07-15 16:21:07.988842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.406 [2024-07-15 16:21:07.988872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.406 [2024-07-15 16:21:07.988881] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.406 [2024-07-15 16:21:07.988895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.406 [2024-07-15 16:21:07.988917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.406 qpair failed and we were unable to recover it. 00:29:32.406 [2024-07-15 16:21:07.998777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.406 [2024-07-15 16:21:07.998874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.406 [2024-07-15 16:21:07.998902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.406 [2024-07-15 16:21:07.998911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.406 [2024-07-15 16:21:07.998918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.406 [2024-07-15 16:21:07.998939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.406 qpair failed and we were unable to recover it. 
00:29:32.406 [2024-07-15 16:21:08.008819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.406 [2024-07-15 16:21:08.008941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.406 [2024-07-15 16:21:08.008967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.406 [2024-07-15 16:21:08.008977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.406 [2024-07-15 16:21:08.008985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.406 [2024-07-15 16:21:08.009005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.406 qpair failed and we were unable to recover it. 00:29:32.406 [2024-07-15 16:21:08.018854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.406 [2024-07-15 16:21:08.018962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.406 [2024-07-15 16:21:08.018988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.406 [2024-07-15 16:21:08.018996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.406 [2024-07-15 16:21:08.019003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.406 [2024-07-15 16:21:08.019023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.406 qpair failed and we were unable to recover it. 00:29:32.406 [2024-07-15 16:21:08.028815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.406 [2024-07-15 16:21:08.028901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.406 [2024-07-15 16:21:08.028929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.406 [2024-07-15 16:21:08.028938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.406 [2024-07-15 16:21:08.028945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.406 [2024-07-15 16:21:08.028965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.406 qpair failed and we were unable to recover it. 
00:29:32.406 [2024-07-15 16:21:08.038936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.406 [2024-07-15 16:21:08.039042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.406 [2024-07-15 16:21:08.039069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.406 [2024-07-15 16:21:08.039078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.406 [2024-07-15 16:21:08.039085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.406 [2024-07-15 16:21:08.039106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.406 qpair failed and we were unable to recover it. 00:29:32.406 [2024-07-15 16:21:08.048961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.407 [2024-07-15 16:21:08.049067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.407 [2024-07-15 16:21:08.049093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.407 [2024-07-15 16:21:08.049102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.407 [2024-07-15 16:21:08.049109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.407 [2024-07-15 16:21:08.049135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.407 qpair failed and we were unable to recover it. 00:29:32.407 [2024-07-15 16:21:08.058971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.407 [2024-07-15 16:21:08.059061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.407 [2024-07-15 16:21:08.059087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.407 [2024-07-15 16:21:08.059096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.407 [2024-07-15 16:21:08.059103] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.407 [2024-07-15 16:21:08.059128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.407 qpair failed and we were unable to recover it. 
00:29:32.407 [2024-07-15 16:21:08.068986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.407 [2024-07-15 16:21:08.069081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.407 [2024-07-15 16:21:08.069107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.407 [2024-07-15 16:21:08.069115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.407 [2024-07-15 16:21:08.069129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.407 [2024-07-15 16:21:08.069150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.407 qpair failed and we were unable to recover it. 00:29:32.407 [2024-07-15 16:21:08.079023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.407 [2024-07-15 16:21:08.079120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.407 [2024-07-15 16:21:08.079152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.407 [2024-07-15 16:21:08.079168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.407 [2024-07-15 16:21:08.079175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.407 [2024-07-15 16:21:08.079196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.407 qpair failed and we were unable to recover it. 00:29:32.407 [2024-07-15 16:21:08.089080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.407 [2024-07-15 16:21:08.089204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.407 [2024-07-15 16:21:08.089233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.407 [2024-07-15 16:21:08.089242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.407 [2024-07-15 16:21:08.089253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.407 [2024-07-15 16:21:08.089275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.407 qpair failed and we were unable to recover it. 
00:29:32.407 [2024-07-15 16:21:08.099109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.407 [2024-07-15 16:21:08.099208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.407 [2024-07-15 16:21:08.099236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.407 [2024-07-15 16:21:08.099245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.407 [2024-07-15 16:21:08.099252] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.407 [2024-07-15 16:21:08.099273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.407 qpair failed and we were unable to recover it. 00:29:32.407 [2024-07-15 16:21:08.109192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.407 [2024-07-15 16:21:08.109320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.407 [2024-07-15 16:21:08.109345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.407 [2024-07-15 16:21:08.109354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.407 [2024-07-15 16:21:08.109361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.407 [2024-07-15 16:21:08.109381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.407 qpair failed and we were unable to recover it. 00:29:32.407 [2024-07-15 16:21:08.119186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.407 [2024-07-15 16:21:08.119283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.407 [2024-07-15 16:21:08.119309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.407 [2024-07-15 16:21:08.119319] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.407 [2024-07-15 16:21:08.119326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.407 [2024-07-15 16:21:08.119346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.407 qpair failed and we were unable to recover it. 
00:29:32.407 [2024-07-15 16:21:08.129208] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.407 [2024-07-15 16:21:08.129312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.407 [2024-07-15 16:21:08.129337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.407 [2024-07-15 16:21:08.129346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.407 [2024-07-15 16:21:08.129354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.407 [2024-07-15 16:21:08.129373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.407 qpair failed and we were unable to recover it. 00:29:32.407 [2024-07-15 16:21:08.139093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.407 [2024-07-15 16:21:08.139194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.407 [2024-07-15 16:21:08.139221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.407 [2024-07-15 16:21:08.139229] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.407 [2024-07-15 16:21:08.139237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.407 [2024-07-15 16:21:08.139257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.407 qpair failed and we were unable to recover it. 00:29:32.407 [2024-07-15 16:21:08.149253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.407 [2024-07-15 16:21:08.149354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.407 [2024-07-15 16:21:08.149379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.407 [2024-07-15 16:21:08.149388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.407 [2024-07-15 16:21:08.149395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.407 [2024-07-15 16:21:08.149416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.407 qpair failed and we were unable to recover it. 
00:29:32.407 [2024-07-15 16:21:08.159355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.407 [2024-07-15 16:21:08.159454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.407 [2024-07-15 16:21:08.159480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.407 [2024-07-15 16:21:08.159489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.407 [2024-07-15 16:21:08.159496] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.407 [2024-07-15 16:21:08.159517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.407 qpair failed and we were unable to recover it. 00:29:32.407 [2024-07-15 16:21:08.169318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.407 [2024-07-15 16:21:08.169437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.407 [2024-07-15 16:21:08.169463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.407 [2024-07-15 16:21:08.169484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.407 [2024-07-15 16:21:08.169491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.407 [2024-07-15 16:21:08.169512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.407 qpair failed and we were unable to recover it. 00:29:32.407 [2024-07-15 16:21:08.179341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.407 [2024-07-15 16:21:08.179436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.407 [2024-07-15 16:21:08.179461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.407 [2024-07-15 16:21:08.179471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.407 [2024-07-15 16:21:08.179478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.407 [2024-07-15 16:21:08.179497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.407 qpair failed and we were unable to recover it. 
00:29:32.407 [2024-07-15 16:21:08.189377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.407 [2024-07-15 16:21:08.189470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.407 [2024-07-15 16:21:08.189496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.407 [2024-07-15 16:21:08.189505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.408 [2024-07-15 16:21:08.189512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.408 [2024-07-15 16:21:08.189532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.408 qpair failed and we were unable to recover it. 00:29:32.408 [2024-07-15 16:21:08.199393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.408 [2024-07-15 16:21:08.199522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.408 [2024-07-15 16:21:08.199547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.408 [2024-07-15 16:21:08.199556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.408 [2024-07-15 16:21:08.199563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.408 [2024-07-15 16:21:08.199582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.408 qpair failed and we were unable to recover it. 00:29:32.408 [2024-07-15 16:21:08.209437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.408 [2024-07-15 16:21:08.209562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.408 [2024-07-15 16:21:08.209588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.408 [2024-07-15 16:21:08.209596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.408 [2024-07-15 16:21:08.209603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.408 [2024-07-15 16:21:08.209622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.408 qpair failed and we were unable to recover it. 
00:29:32.408 [2024-07-15 16:21:08.219468] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.408 [2024-07-15 16:21:08.219572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.408 [2024-07-15 16:21:08.219598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.408 [2024-07-15 16:21:08.219607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.408 [2024-07-15 16:21:08.219615] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.408 [2024-07-15 16:21:08.219635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.408 qpair failed and we were unable to recover it. 00:29:32.408 [2024-07-15 16:21:08.229468] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.408 [2024-07-15 16:21:08.229578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.408 [2024-07-15 16:21:08.229603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.408 [2024-07-15 16:21:08.229612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.408 [2024-07-15 16:21:08.229618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.408 [2024-07-15 16:21:08.229639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.408 qpair failed and we were unable to recover it. 00:29:32.408 [2024-07-15 16:21:08.239507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.408 [2024-07-15 16:21:08.239696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.408 [2024-07-15 16:21:08.239721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.408 [2024-07-15 16:21:08.239730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.408 [2024-07-15 16:21:08.239737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.408 [2024-07-15 16:21:08.239756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.408 qpair failed and we were unable to recover it. 
00:29:32.669 [2024-07-15 16:21:08.249567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.669 [2024-07-15 16:21:08.249681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.669 [2024-07-15 16:21:08.249707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.669 [2024-07-15 16:21:08.249716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.669 [2024-07-15 16:21:08.249723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.669 [2024-07-15 16:21:08.249745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.669 qpair failed and we were unable to recover it. 00:29:32.669 [2024-07-15 16:21:08.259571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.669 [2024-07-15 16:21:08.259662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.669 [2024-07-15 16:21:08.259694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.669 [2024-07-15 16:21:08.259703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.669 [2024-07-15 16:21:08.259709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.669 [2024-07-15 16:21:08.259730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.669 qpair failed and we were unable to recover it. 00:29:32.669 [2024-07-15 16:21:08.269613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.669 [2024-07-15 16:21:08.269730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.669 [2024-07-15 16:21:08.269757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.669 [2024-07-15 16:21:08.269765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.669 [2024-07-15 16:21:08.269772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.669 [2024-07-15 16:21:08.269793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.669 qpair failed and we were unable to recover it. 
00:29:32.669 [2024-07-15 16:21:08.279653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.669 [2024-07-15 16:21:08.279761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.669 [2024-07-15 16:21:08.279800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.669 [2024-07-15 16:21:08.279810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.669 [2024-07-15 16:21:08.279818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.669 [2024-07-15 16:21:08.279844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.669 qpair failed and we were unable to recover it. 00:29:32.669 [2024-07-15 16:21:08.289671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.669 [2024-07-15 16:21:08.289895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.669 [2024-07-15 16:21:08.289924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.669 [2024-07-15 16:21:08.289932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.669 [2024-07-15 16:21:08.289939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.669 [2024-07-15 16:21:08.289962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.669 qpair failed and we were unable to recover it. 00:29:32.669 [2024-07-15 16:21:08.299718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.669 [2024-07-15 16:21:08.299826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.669 [2024-07-15 16:21:08.299853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.669 [2024-07-15 16:21:08.299863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.669 [2024-07-15 16:21:08.299870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.669 [2024-07-15 16:21:08.299897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.669 qpair failed and we were unable to recover it. 
00:29:32.669 [2024-07-15 16:21:08.309743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.669 [2024-07-15 16:21:08.309845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.669 [2024-07-15 16:21:08.309883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.669 [2024-07-15 16:21:08.309894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.669 [2024-07-15 16:21:08.309902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.669 [2024-07-15 16:21:08.309927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.669 qpair failed and we were unable to recover it. 00:29:32.669 [2024-07-15 16:21:08.319788] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.669 [2024-07-15 16:21:08.319884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.669 [2024-07-15 16:21:08.319913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.669 [2024-07-15 16:21:08.319925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.669 [2024-07-15 16:21:08.319933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.669 [2024-07-15 16:21:08.319955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.669 qpair failed and we were unable to recover it. 00:29:32.669 [2024-07-15 16:21:08.329838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.669 [2024-07-15 16:21:08.329955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.669 [2024-07-15 16:21:08.329993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.669 [2024-07-15 16:21:08.330004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.669 [2024-07-15 16:21:08.330011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.669 [2024-07-15 16:21:08.330037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.669 qpair failed and we were unable to recover it. 
00:29:32.669 [2024-07-15 16:21:08.339836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.669 [2024-07-15 16:21:08.339931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.669 [2024-07-15 16:21:08.339960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.669 [2024-07-15 16:21:08.339969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.669 [2024-07-15 16:21:08.339977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.669 [2024-07-15 16:21:08.339999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.669 qpair failed and we were unable to recover it. 00:29:32.669 [2024-07-15 16:21:08.349867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.669 [2024-07-15 16:21:08.349974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.669 [2024-07-15 16:21:08.350007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.670 [2024-07-15 16:21:08.350016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.670 [2024-07-15 16:21:08.350023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.670 [2024-07-15 16:21:08.350044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.670 qpair failed and we were unable to recover it. 00:29:32.670 [2024-07-15 16:21:08.359988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.670 [2024-07-15 16:21:08.360093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.670 [2024-07-15 16:21:08.360120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.670 [2024-07-15 16:21:08.360136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.670 [2024-07-15 16:21:08.360143] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:32.670 [2024-07-15 16:21:08.360167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.670 qpair failed and we were unable to recover it. 
00:29:32.670 Read completed with error (sct=0, sc=8)
00:29:32.670 starting I/O failed
00:29:32.670 Read completed with error (sct=0, sc=8)
00:29:32.670 starting I/O failed
00:29:32.670 Read completed with error (sct=0, sc=8)
00:29:32.670 starting I/O failed
00:29:32.670 Read completed with error (sct=0, sc=8)
00:29:32.670 starting I/O failed
00:29:32.670 Read completed with error (sct=0, sc=8)
00:29:32.670 starting I/O failed
00:29:32.670 Read completed with error (sct=0, sc=8)
00:29:32.670 starting I/O failed
00:29:32.670 Read completed with error (sct=0, sc=8)
00:29:32.670 starting I/O failed
00:29:32.670 Read completed with error (sct=0, sc=8)
00:29:32.670 starting I/O failed
00:29:32.670 Read completed with error (sct=0, sc=8)
00:29:32.670 starting I/O failed
00:29:32.670 Read completed with error (sct=0, sc=8)
00:29:32.670 starting I/O failed
00:29:32.670 Read completed with error (sct=0, sc=8)
00:29:32.670 starting I/O failed
00:29:32.670 Read completed with error (sct=0, sc=8)
00:29:32.670 starting I/O failed
00:29:32.670 Write completed with error (sct=0, sc=8)
00:29:32.670 starting I/O failed
00:29:32.670 Write completed with error (sct=0, sc=8)
00:29:32.670 starting I/O failed
00:29:32.670 Write completed with error (sct=0, sc=8)
00:29:32.670 starting I/O failed
00:29:32.670 Read completed with error (sct=0, sc=8)
00:29:32.670 starting I/O failed
00:29:32.670 Write completed with error (sct=0, sc=8)
00:29:32.670 starting I/O failed
00:29:32.670 Read completed with error (sct=0, sc=8)
00:29:32.670 starting I/O failed
00:29:32.670 Read completed with error (sct=0, sc=8)
00:29:32.670 starting I/O failed
00:29:32.670 Read completed with error (sct=0, sc=8)
00:29:32.670 starting I/O failed
00:29:32.670 Write completed with error (sct=0, sc=8)
00:29:32.670 starting I/O failed
00:29:32.670 Write completed with error (sct=0, sc=8)
00:29:32.670 starting I/O failed
00:29:32.670 Write completed with error (sct=0, sc=8)
00:29:32.670 starting I/O failed
00:29:32.670 Write completed with error (sct=0, sc=8)
00:29:32.670 starting I/O failed
00:29:32.670 Write completed with error (sct=0, sc=8)
00:29:32.670 starting I/O failed
00:29:32.670 Read completed with error (sct=0, sc=8)
00:29:32.670 starting I/O failed
00:29:32.670 Read completed with error (sct=0, sc=8)
00:29:32.670 starting I/O failed
00:29:32.670 Read completed with error (sct=0, sc=8)
00:29:32.670 starting I/O failed
00:29:32.670 Write completed with error (sct=0, sc=8)
00:29:32.670 starting I/O failed
00:29:32.670 Write completed with error (sct=0, sc=8)
00:29:32.670 starting I/O failed
00:29:32.670 Read completed with error (sct=0, sc=8)
00:29:32.670 starting I/O failed
00:29:32.670 Write completed with error (sct=0, sc=8)
00:29:32.670 starting I/O failed
00:29:32.670 [2024-07-15 16:21:08.360514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.670 [2024-07-15 16:21:08.369853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.670 [2024-07-15 16:21:08.369972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.670 [2024-07-15 16:21:08.370005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.670 [2024-07-15 16:21:08.370020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.670 [2024-07-15 16:21:08.370027] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220
00:29:32.670 [2024-07-15 16:21:08.370052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.670 qpair failed and we were unable to recover it.
00:29:32.670 [2024-07-15 16:21:08.379913] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.670 [2024-07-15 16:21:08.380011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.670 [2024-07-15 16:21:08.380052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.670 [2024-07-15 16:21:08.380063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.670 [2024-07-15 16:21:08.380070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220
00:29:32.670 [2024-07-15 16:21:08.380096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.670 qpair failed and we were unable to recover it.
00:29:32.670 [2024-07-15 16:21:08.389970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.670 [2024-07-15 16:21:08.390084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.670 [2024-07-15 16:21:08.390126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.670 [2024-07-15 16:21:08.390137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.670 [2024-07-15 16:21:08.390144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220
00:29:32.670 [2024-07-15 16:21:08.390170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.670 qpair failed and we were unable to recover it.
00:29:32.670 [2024-07-15 16:21:08.400026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.670 [2024-07-15 16:21:08.400161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.670 [2024-07-15 16:21:08.400197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.670 [2024-07-15 16:21:08.400207] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.670 [2024-07-15 16:21:08.400214] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220
00:29:32.670 [2024-07-15 16:21:08.400238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.670 qpair failed and we were unable to recover it.
00:29:32.670 [2024-07-15 16:21:08.410047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.670 [2024-07-15 16:21:08.410158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.670 [2024-07-15 16:21:08.410183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.670 [2024-07-15 16:21:08.410192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.670 [2024-07-15 16:21:08.410198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:32.670 [2024-07-15 16:21:08.410217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.670 qpair failed and we were unable to recover it. 00:29:32.670 [2024-07-15 16:21:08.420022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.670 [2024-07-15 16:21:08.420111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.670 [2024-07-15 16:21:08.420140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.670 [2024-07-15 16:21:08.420149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.670 [2024-07-15 16:21:08.420157] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:32.670 [2024-07-15 16:21:08.420175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.670 qpair failed and we were unable to recover it. 00:29:32.670 [2024-07-15 16:21:08.430058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.670 [2024-07-15 16:21:08.430159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.670 [2024-07-15 16:21:08.430183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.670 [2024-07-15 16:21:08.430191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.670 [2024-07-15 16:21:08.430198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:32.670 [2024-07-15 16:21:08.430217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.670 qpair failed and we were unable to recover it. 
00:29:32.670 [2024-07-15 16:21:08.440085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.670 [2024-07-15 16:21:08.440179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.670 [2024-07-15 16:21:08.440201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.670 [2024-07-15 16:21:08.440210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.670 [2024-07-15 16:21:08.440217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:32.670 [2024-07-15 16:21:08.440234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.670 qpair failed and we were unable to recover it. 00:29:32.670 [2024-07-15 16:21:08.450104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.670 [2024-07-15 16:21:08.450202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.670 [2024-07-15 16:21:08.450224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.670 [2024-07-15 16:21:08.450232] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.670 [2024-07-15 16:21:08.450240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:32.670 [2024-07-15 16:21:08.450258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.670 qpair failed and we were unable to recover it. 00:29:32.671 [2024-07-15 16:21:08.460040] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.671 [2024-07-15 16:21:08.460128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.671 [2024-07-15 16:21:08.460152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.671 [2024-07-15 16:21:08.460161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.671 [2024-07-15 16:21:08.460168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:32.671 [2024-07-15 16:21:08.460185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.671 qpair failed and we were unable to recover it. 
00:29:32.671 [2024-07-15 16:21:08.470200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.671 [2024-07-15 16:21:08.470284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.671 [2024-07-15 16:21:08.470302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.671 [2024-07-15 16:21:08.470310] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.671 [2024-07-15 16:21:08.470317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:32.671 [2024-07-15 16:21:08.470333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.671 qpair failed and we were unable to recover it. 00:29:32.671 [2024-07-15 16:21:08.480104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.671 [2024-07-15 16:21:08.480209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.671 [2024-07-15 16:21:08.480229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.671 [2024-07-15 16:21:08.480237] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.671 [2024-07-15 16:21:08.480243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:32.671 [2024-07-15 16:21:08.480259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.671 qpair failed and we were unable to recover it. 00:29:32.671 [2024-07-15 16:21:08.490213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.671 [2024-07-15 16:21:08.490316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.671 [2024-07-15 16:21:08.490334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.671 [2024-07-15 16:21:08.490343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.671 [2024-07-15 16:21:08.490349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:32.671 [2024-07-15 16:21:08.490365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.671 qpair failed and we were unable to recover it. 
00:29:32.671 [2024-07-15 16:21:08.500221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.671 [2024-07-15 16:21:08.500303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.671 [2024-07-15 16:21:08.500321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.671 [2024-07-15 16:21:08.500329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.671 [2024-07-15 16:21:08.500337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:32.671 [2024-07-15 16:21:08.500357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.671 qpair failed and we were unable to recover it. 00:29:32.931 [2024-07-15 16:21:08.510262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.931 [2024-07-15 16:21:08.510352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.931 [2024-07-15 16:21:08.510370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.931 [2024-07-15 16:21:08.510378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.931 [2024-07-15 16:21:08.510385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:32.931 [2024-07-15 16:21:08.510400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.931 qpair failed and we were unable to recover it. 00:29:32.931 [2024-07-15 16:21:08.520325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.931 [2024-07-15 16:21:08.520410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.931 [2024-07-15 16:21:08.520427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.931 [2024-07-15 16:21:08.520436] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.932 [2024-07-15 16:21:08.520442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:32.932 [2024-07-15 16:21:08.520457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.932 qpair failed and we were unable to recover it. 
00:29:32.932 [2024-07-15 16:21:08.530314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.932 [2024-07-15 16:21:08.530438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.932 [2024-07-15 16:21:08.530456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.932 [2024-07-15 16:21:08.530464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.932 [2024-07-15 16:21:08.530471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:32.932 [2024-07-15 16:21:08.530486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.932 qpair failed and we were unable to recover it. 00:29:32.932 [2024-07-15 16:21:08.540367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.932 [2024-07-15 16:21:08.540470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.932 [2024-07-15 16:21:08.540487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.932 [2024-07-15 16:21:08.540495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.932 [2024-07-15 16:21:08.540501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:32.932 [2024-07-15 16:21:08.540516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.932 qpair failed and we were unable to recover it. 00:29:32.932 [2024-07-15 16:21:08.550266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.932 [2024-07-15 16:21:08.550354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.932 [2024-07-15 16:21:08.550375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.932 [2024-07-15 16:21:08.550382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.932 [2024-07-15 16:21:08.550389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:32.932 [2024-07-15 16:21:08.550404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.932 qpair failed and we were unable to recover it. 
00:29:32.932 [2024-07-15 16:21:08.560422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.932 [2024-07-15 16:21:08.560542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.932 [2024-07-15 16:21:08.560560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.932 [2024-07-15 16:21:08.560568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.932 [2024-07-15 16:21:08.560575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:32.932 [2024-07-15 16:21:08.560591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.932 qpair failed and we were unable to recover it. 00:29:32.932 [2024-07-15 16:21:08.570452] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.932 [2024-07-15 16:21:08.570538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.932 [2024-07-15 16:21:08.570554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.932 [2024-07-15 16:21:08.570562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.932 [2024-07-15 16:21:08.570568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:32.932 [2024-07-15 16:21:08.570583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.932 qpair failed and we were unable to recover it. 00:29:32.932 [2024-07-15 16:21:08.580472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.932 [2024-07-15 16:21:08.580554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.932 [2024-07-15 16:21:08.580570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.932 [2024-07-15 16:21:08.580578] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.932 [2024-07-15 16:21:08.580584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:32.932 [2024-07-15 16:21:08.580598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.932 qpair failed and we were unable to recover it. 
00:29:32.932 [2024-07-15 16:21:08.590481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.932 [2024-07-15 16:21:08.590570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.932 [2024-07-15 16:21:08.590586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.932 [2024-07-15 16:21:08.590594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.932 [2024-07-15 16:21:08.590600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:32.932 [2024-07-15 16:21:08.590618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.932 qpair failed and we were unable to recover it. 00:29:32.932 [2024-07-15 16:21:08.600558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.932 [2024-07-15 16:21:08.600647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.932 [2024-07-15 16:21:08.600663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.932 [2024-07-15 16:21:08.600671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.932 [2024-07-15 16:21:08.600677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:32.932 [2024-07-15 16:21:08.600692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.932 qpair failed and we were unable to recover it. 00:29:32.932 [2024-07-15 16:21:08.610545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.932 [2024-07-15 16:21:08.610631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.932 [2024-07-15 16:21:08.610647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.932 [2024-07-15 16:21:08.610655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.932 [2024-07-15 16:21:08.610662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:32.932 [2024-07-15 16:21:08.610676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.932 qpair failed and we were unable to recover it. 
00:29:32.932 [2024-07-15 16:21:08.620570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.932 [2024-07-15 16:21:08.620653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.932 [2024-07-15 16:21:08.620668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.932 [2024-07-15 16:21:08.620676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.932 [2024-07-15 16:21:08.620683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:32.932 [2024-07-15 16:21:08.620697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.932 qpair failed and we were unable to recover it. 00:29:32.932 [2024-07-15 16:21:08.630616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.932 [2024-07-15 16:21:08.630692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.932 [2024-07-15 16:21:08.630708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.932 [2024-07-15 16:21:08.630715] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.932 [2024-07-15 16:21:08.630722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:32.932 [2024-07-15 16:21:08.630736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.932 qpair failed and we were unable to recover it. 00:29:32.932 [2024-07-15 16:21:08.640606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.932 [2024-07-15 16:21:08.640688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.932 [2024-07-15 16:21:08.640708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.932 [2024-07-15 16:21:08.640717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.932 [2024-07-15 16:21:08.640723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:32.932 [2024-07-15 16:21:08.640738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.932 qpair failed and we were unable to recover it. 
00:29:32.932 [2024-07-15 16:21:08.650718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.932 [2024-07-15 16:21:08.650799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.932 [2024-07-15 16:21:08.650815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.932 [2024-07-15 16:21:08.650822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.932 [2024-07-15 16:21:08.650830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:32.932 [2024-07-15 16:21:08.650844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.932 qpair failed and we were unable to recover it. 00:29:32.932 [2024-07-15 16:21:08.660689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.933 [2024-07-15 16:21:08.660779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.933 [2024-07-15 16:21:08.660805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.933 [2024-07-15 16:21:08.660814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.933 [2024-07-15 16:21:08.660822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:32.933 [2024-07-15 16:21:08.660841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.933 qpair failed and we were unable to recover it. 00:29:32.933 [2024-07-15 16:21:08.670747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.933 [2024-07-15 16:21:08.670835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.933 [2024-07-15 16:21:08.670861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.933 [2024-07-15 16:21:08.670870] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.933 [2024-07-15 16:21:08.670877] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:32.933 [2024-07-15 16:21:08.670896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.933 qpair failed and we were unable to recover it. 
00:29:32.933 [2024-07-15 16:21:08.680762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.933 [2024-07-15 16:21:08.680849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.933 [2024-07-15 16:21:08.680875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.933 [2024-07-15 16:21:08.680884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.933 [2024-07-15 16:21:08.680891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:32.933 [2024-07-15 16:21:08.680915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.933 qpair failed and we were unable to recover it. 00:29:32.933 [2024-07-15 16:21:08.690809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.933 [2024-07-15 16:21:08.690902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.933 [2024-07-15 16:21:08.690928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.933 [2024-07-15 16:21:08.690937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.933 [2024-07-15 16:21:08.690944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:32.933 [2024-07-15 16:21:08.690963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.933 qpair failed and we were unable to recover it. 00:29:32.933 [2024-07-15 16:21:08.700710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.933 [2024-07-15 16:21:08.700800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.933 [2024-07-15 16:21:08.700826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.933 [2024-07-15 16:21:08.700835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.933 [2024-07-15 16:21:08.700841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:32.933 [2024-07-15 16:21:08.700861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.933 qpair failed and we were unable to recover it. 
00:29:32.933 [2024-07-15 16:21:08.710837] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.933 [2024-07-15 16:21:08.710917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.933 [2024-07-15 16:21:08.710935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.933 [2024-07-15 16:21:08.710943] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.933 [2024-07-15 16:21:08.710950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:32.933 [2024-07-15 16:21:08.710965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.933 qpair failed and we were unable to recover it. 00:29:32.933 [2024-07-15 16:21:08.720856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.933 [2024-07-15 16:21:08.720941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.933 [2024-07-15 16:21:08.720957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.933 [2024-07-15 16:21:08.720965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.933 [2024-07-15 16:21:08.720972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:32.933 [2024-07-15 16:21:08.720986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.933 qpair failed and we were unable to recover it. 00:29:32.933 [2024-07-15 16:21:08.730867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.933 [2024-07-15 16:21:08.730954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.933 [2024-07-15 16:21:08.730977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.933 [2024-07-15 16:21:08.730985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.933 [2024-07-15 16:21:08.730991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:32.933 [2024-07-15 16:21:08.731006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.933 qpair failed and we were unable to recover it. 
00:29:32.933 [2024-07-15 16:21:08.740898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.933 [2024-07-15 16:21:08.740976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.933 [2024-07-15 16:21:08.740993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.933 [2024-07-15 16:21:08.741002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.933 [2024-07-15 16:21:08.741008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:32.933 [2024-07-15 16:21:08.741023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.933 qpair failed and we were unable to recover it. 00:29:32.933 [2024-07-15 16:21:08.751014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.933 [2024-07-15 16:21:08.751092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.933 [2024-07-15 16:21:08.751108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.933 [2024-07-15 16:21:08.751116] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.933 [2024-07-15 16:21:08.751126] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:32.933 [2024-07-15 16:21:08.751142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.933 qpair failed and we were unable to recover it. 00:29:32.933 [2024-07-15 16:21:08.760980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.933 [2024-07-15 16:21:08.761070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.933 [2024-07-15 16:21:08.761087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.933 [2024-07-15 16:21:08.761094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.933 [2024-07-15 16:21:08.761101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:32.933 [2024-07-15 16:21:08.761115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.933 qpair failed and we were unable to recover it. 
00:29:32.933 [2024-07-15 16:21:08.771009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.933 [2024-07-15 16:21:08.771093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.933 [2024-07-15 16:21:08.771108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.933 [2024-07-15 16:21:08.771116] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.933 [2024-07-15 16:21:08.771129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:32.933 [2024-07-15 16:21:08.771143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.933 qpair failed and we were unable to recover it. 00:29:33.194 [2024-07-15 16:21:08.780906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.194 [2024-07-15 16:21:08.780984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.194 [2024-07-15 16:21:08.781002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.194 [2024-07-15 16:21:08.781009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.194 [2024-07-15 16:21:08.781016] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.195 [2024-07-15 16:21:08.781031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.195 qpair failed and we were unable to recover it. 00:29:33.195 [2024-07-15 16:21:08.790995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.195 [2024-07-15 16:21:08.791079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.195 [2024-07-15 16:21:08.791096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.195 [2024-07-15 16:21:08.791104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.195 [2024-07-15 16:21:08.791111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.195 [2024-07-15 16:21:08.791128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.195 qpair failed and we were unable to recover it. 
00:29:33.195 [2024-07-15 16:21:08.801111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.195 [2024-07-15 16:21:08.801433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.195 [2024-07-15 16:21:08.801450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.195 [2024-07-15 16:21:08.801458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.195 [2024-07-15 16:21:08.801465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.195 [2024-07-15 16:21:08.801479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.195 qpair failed and we were unable to recover it. 00:29:33.195 [2024-07-15 16:21:08.811128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.195 [2024-07-15 16:21:08.811214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.195 [2024-07-15 16:21:08.811231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.195 [2024-07-15 16:21:08.811238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.195 [2024-07-15 16:21:08.811245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.195 [2024-07-15 16:21:08.811260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.195 qpair failed and we were unable to recover it. 00:29:33.195 [2024-07-15 16:21:08.821108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.195 [2024-07-15 16:21:08.821195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.195 [2024-07-15 16:21:08.821212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.195 [2024-07-15 16:21:08.821220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.195 [2024-07-15 16:21:08.821227] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.195 [2024-07-15 16:21:08.821241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.195 qpair failed and we were unable to recover it. 
00:29:33.195 [2024-07-15 16:21:08.831052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.195 [2024-07-15 16:21:08.831140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.195 [2024-07-15 16:21:08.831156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.195 [2024-07-15 16:21:08.831164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.195 [2024-07-15 16:21:08.831170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.195 [2024-07-15 16:21:08.831185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.195 qpair failed and we were unable to recover it. 00:29:33.195 [2024-07-15 16:21:08.841187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.195 [2024-07-15 16:21:08.841270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.195 [2024-07-15 16:21:08.841286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.195 [2024-07-15 16:21:08.841294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.195 [2024-07-15 16:21:08.841300] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.195 [2024-07-15 16:21:08.841314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.195 qpair failed and we were unable to recover it. 00:29:33.195 [2024-07-15 16:21:08.851195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.195 [2024-07-15 16:21:08.851279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.195 [2024-07-15 16:21:08.851295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.195 [2024-07-15 16:21:08.851304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.195 [2024-07-15 16:21:08.851310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.195 [2024-07-15 16:21:08.851325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.195 qpair failed and we were unable to recover it. 
00:29:33.195 [2024-07-15 16:21:08.861234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.195 [2024-07-15 16:21:08.861313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.195 [2024-07-15 16:21:08.861329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.195 [2024-07-15 16:21:08.861337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.195 [2024-07-15 16:21:08.861348] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.195 [2024-07-15 16:21:08.861362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.195 qpair failed and we were unable to recover it. 00:29:33.195 [2024-07-15 16:21:08.871266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.195 [2024-07-15 16:21:08.871348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.195 [2024-07-15 16:21:08.871364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.195 [2024-07-15 16:21:08.871373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.195 [2024-07-15 16:21:08.871379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.195 [2024-07-15 16:21:08.871393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.195 qpair failed and we were unable to recover it. 00:29:33.195 [2024-07-15 16:21:08.881315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.195 [2024-07-15 16:21:08.881397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.195 [2024-07-15 16:21:08.881413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.195 [2024-07-15 16:21:08.881421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.195 [2024-07-15 16:21:08.881427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.195 [2024-07-15 16:21:08.881441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.195 qpair failed and we were unable to recover it. 
00:29:33.195 [2024-07-15 16:21:08.891352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.195 [2024-07-15 16:21:08.891485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.195 [2024-07-15 16:21:08.891502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.195 [2024-07-15 16:21:08.891509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.195 [2024-07-15 16:21:08.891516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.195 [2024-07-15 16:21:08.891530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.195 qpair failed and we were unable to recover it. 00:29:33.195 [2024-07-15 16:21:08.901371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.195 [2024-07-15 16:21:08.901551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.195 [2024-07-15 16:21:08.901567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.195 [2024-07-15 16:21:08.901574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.195 [2024-07-15 16:21:08.901580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.195 [2024-07-15 16:21:08.901594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.195 qpair failed and we were unable to recover it. 00:29:33.195 [2024-07-15 16:21:08.911375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.195 [2024-07-15 16:21:08.911455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.195 [2024-07-15 16:21:08.911471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.195 [2024-07-15 16:21:08.911479] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.195 [2024-07-15 16:21:08.911486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.196 [2024-07-15 16:21:08.911500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.196 qpair failed and we were unable to recover it. 
00:29:33.196 [2024-07-15 16:21:08.921428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.196 [2024-07-15 16:21:08.921512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.196 [2024-07-15 16:21:08.921528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.196 [2024-07-15 16:21:08.921536] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.196 [2024-07-15 16:21:08.921542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.196 [2024-07-15 16:21:08.921556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-07-15 16:21:08.931451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.196 [2024-07-15 16:21:08.931536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.196 [2024-07-15 16:21:08.931552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.196 [2024-07-15 16:21:08.931561] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.196 [2024-07-15 16:21:08.931567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.196 [2024-07-15 16:21:08.931581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-07-15 16:21:08.941481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.196 [2024-07-15 16:21:08.941568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.196 [2024-07-15 16:21:08.941583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.196 [2024-07-15 16:21:08.941591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.196 [2024-07-15 16:21:08.941597] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.196 [2024-07-15 16:21:08.941611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.196 qpair failed and we were unable to recover it. 
00:29:33.196 [2024-07-15 16:21:08.951473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.196 [2024-07-15 16:21:08.951549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.196 [2024-07-15 16:21:08.951565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.196 [2024-07-15 16:21:08.951576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.196 [2024-07-15 16:21:08.951583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.196 [2024-07-15 16:21:08.951597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-07-15 16:21:08.961472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.196 [2024-07-15 16:21:08.961557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.196 [2024-07-15 16:21:08.961573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.196 [2024-07-15 16:21:08.961581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.196 [2024-07-15 16:21:08.961587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.196 [2024-07-15 16:21:08.961601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-07-15 16:21:08.971427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.196 [2024-07-15 16:21:08.971514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.196 [2024-07-15 16:21:08.971530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.196 [2024-07-15 16:21:08.971538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.196 [2024-07-15 16:21:08.971544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.196 [2024-07-15 16:21:08.971558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.196 qpair failed and we were unable to recover it. 
00:29:33.196 [2024-07-15 16:21:08.981550] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.196 [2024-07-15 16:21:08.981640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.196 [2024-07-15 16:21:08.981656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.196 [2024-07-15 16:21:08.981663] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.196 [2024-07-15 16:21:08.981669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.196 [2024-07-15 16:21:08.981683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-07-15 16:21:08.991633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.196 [2024-07-15 16:21:08.991719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.196 [2024-07-15 16:21:08.991735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.196 [2024-07-15 16:21:08.991743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.196 [2024-07-15 16:21:08.991750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.196 [2024-07-15 16:21:08.991765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-07-15 16:21:09.001643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.196 [2024-07-15 16:21:09.001723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.196 [2024-07-15 16:21:09.001739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.196 [2024-07-15 16:21:09.001747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.196 [2024-07-15 16:21:09.001754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.196 [2024-07-15 16:21:09.001768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.196 qpair failed and we were unable to recover it. 
00:29:33.196 [2024-07-15 16:21:09.011687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.196 [2024-07-15 16:21:09.011777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.196 [2024-07-15 16:21:09.011794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.196 [2024-07-15 16:21:09.011802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.196 [2024-07-15 16:21:09.011808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.196 [2024-07-15 16:21:09.011823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-07-15 16:21:09.021694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.196 [2024-07-15 16:21:09.021775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.196 [2024-07-15 16:21:09.021791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.196 [2024-07-15 16:21:09.021799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.196 [2024-07-15 16:21:09.021806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.196 [2024-07-15 16:21:09.021820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-07-15 16:21:09.031742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.196 [2024-07-15 16:21:09.031826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.196 [2024-07-15 16:21:09.031842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.196 [2024-07-15 16:21:09.031849] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.196 [2024-07-15 16:21:09.031856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.196 [2024-07-15 16:21:09.031871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.196 qpair failed and we were unable to recover it. 
00:29:33.459 [2024-07-15 16:21:09.041789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.459 [2024-07-15 16:21:09.041877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.459 [2024-07-15 16:21:09.041893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.459 [2024-07-15 16:21:09.041905] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.459 [2024-07-15 16:21:09.041912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.459 [2024-07-15 16:21:09.041926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.459 qpair failed and we were unable to recover it. 00:29:33.459 [2024-07-15 16:21:09.051799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.459 [2024-07-15 16:21:09.051885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.459 [2024-07-15 16:21:09.051902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.459 [2024-07-15 16:21:09.051910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.459 [2024-07-15 16:21:09.051917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.459 [2024-07-15 16:21:09.051931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.459 qpair failed and we were unable to recover it. 00:29:33.459 [2024-07-15 16:21:09.061803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.459 [2024-07-15 16:21:09.061895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.459 [2024-07-15 16:21:09.061911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.459 [2024-07-15 16:21:09.061919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.459 [2024-07-15 16:21:09.061925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.459 [2024-07-15 16:21:09.061939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.459 qpair failed and we were unable to recover it. 
00:29:33.459 [2024-07-15 16:21:09.071850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.459 [2024-07-15 16:21:09.071933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.459 [2024-07-15 16:21:09.071951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.459 [2024-07-15 16:21:09.071960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.459 [2024-07-15 16:21:09.071967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.459 [2024-07-15 16:21:09.071982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.459 qpair failed and we were unable to recover it. 00:29:33.459 [2024-07-15 16:21:09.081865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.459 [2024-07-15 16:21:09.081964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.459 [2024-07-15 16:21:09.081981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.459 [2024-07-15 16:21:09.081988] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.459 [2024-07-15 16:21:09.081995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.459 [2024-07-15 16:21:09.082009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.459 qpair failed and we were unable to recover it. 00:29:33.459 [2024-07-15 16:21:09.091889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.459 [2024-07-15 16:21:09.091983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.459 [2024-07-15 16:21:09.092000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.459 [2024-07-15 16:21:09.092007] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.459 [2024-07-15 16:21:09.092014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.459 [2024-07-15 16:21:09.092029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.459 qpair failed and we were unable to recover it. 
00:29:33.459 [2024-07-15 16:21:09.101901] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.459 [2024-07-15 16:21:09.101983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.459 [2024-07-15 16:21:09.101999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.459 [2024-07-15 16:21:09.102007] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.459 [2024-07-15 16:21:09.102014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.459 [2024-07-15 16:21:09.102028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.459 qpair failed and we were unable to recover it. 00:29:33.459 [2024-07-15 16:21:09.111948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.459 [2024-07-15 16:21:09.112026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.459 [2024-07-15 16:21:09.112042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.459 [2024-07-15 16:21:09.112050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.459 [2024-07-15 16:21:09.112057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.459 [2024-07-15 16:21:09.112071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.459 qpair failed and we were unable to recover it. 00:29:33.459 [2024-07-15 16:21:09.122000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.459 [2024-07-15 16:21:09.122081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.460 [2024-07-15 16:21:09.122097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.460 [2024-07-15 16:21:09.122105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.460 [2024-07-15 16:21:09.122111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.460 [2024-07-15 16:21:09.122129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.460 qpair failed and we were unable to recover it. 
00:29:33.460 [2024-07-15 16:21:09.132020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.460 [2024-07-15 16:21:09.132121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.460 [2024-07-15 16:21:09.132143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.460 [2024-07-15 16:21:09.132154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.460 [2024-07-15 16:21:09.132161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.460 [2024-07-15 16:21:09.132176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.460 qpair failed and we were unable to recover it. 00:29:33.460 [2024-07-15 16:21:09.142012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.460 [2024-07-15 16:21:09.142096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.460 [2024-07-15 16:21:09.142113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.460 [2024-07-15 16:21:09.142120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.460 [2024-07-15 16:21:09.142132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.460 [2024-07-15 16:21:09.142147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.460 qpair failed and we were unable to recover it. 00:29:33.460 [2024-07-15 16:21:09.151996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.460 [2024-07-15 16:21:09.152104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.460 [2024-07-15 16:21:09.152120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.460 [2024-07-15 16:21:09.152132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.460 [2024-07-15 16:21:09.152139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.460 [2024-07-15 16:21:09.152154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.460 qpair failed and we were unable to recover it. 
00:29:33.460 [2024-07-15 16:21:09.162066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.460 [2024-07-15 16:21:09.162152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.460 [2024-07-15 16:21:09.162169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.460 [2024-07-15 16:21:09.162177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.460 [2024-07-15 16:21:09.162184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.460 [2024-07-15 16:21:09.162198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.460 qpair failed and we were unable to recover it. 00:29:33.460 [2024-07-15 16:21:09.171985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.460 [2024-07-15 16:21:09.172069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.460 [2024-07-15 16:21:09.172085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.460 [2024-07-15 16:21:09.172093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.460 [2024-07-15 16:21:09.172100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.460 [2024-07-15 16:21:09.172114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.460 qpair failed and we were unable to recover it. 00:29:33.460 [2024-07-15 16:21:09.182071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.460 [2024-07-15 16:21:09.182156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.460 [2024-07-15 16:21:09.182173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.460 [2024-07-15 16:21:09.182181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.460 [2024-07-15 16:21:09.182187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.460 [2024-07-15 16:21:09.182202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.460 qpair failed and we were unable to recover it. 
00:29:33.460 [2024-07-15 16:21:09.192186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.460 [2024-07-15 16:21:09.192265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.460 [2024-07-15 16:21:09.192281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.460 [2024-07-15 16:21:09.192288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.460 [2024-07-15 16:21:09.192295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.460 [2024-07-15 16:21:09.192309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.460 qpair failed and we were unable to recover it. 00:29:33.460 [2024-07-15 16:21:09.202172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.460 [2024-07-15 16:21:09.202273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.460 [2024-07-15 16:21:09.202289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.460 [2024-07-15 16:21:09.202297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.460 [2024-07-15 16:21:09.202303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.460 [2024-07-15 16:21:09.202317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.460 qpair failed and we were unable to recover it. 00:29:33.460 [2024-07-15 16:21:09.212158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.460 [2024-07-15 16:21:09.212246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.460 [2024-07-15 16:21:09.212262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.460 [2024-07-15 16:21:09.212270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.460 [2024-07-15 16:21:09.212276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.460 [2024-07-15 16:21:09.212291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.460 qpair failed and we were unable to recover it. 
00:29:33.460 [2024-07-15 16:21:09.222188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.460 [2024-07-15 16:21:09.222264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.460 [2024-07-15 16:21:09.222283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.460 [2024-07-15 16:21:09.222291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.460 [2024-07-15 16:21:09.222297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.460 [2024-07-15 16:21:09.222312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.460 qpair failed and we were unable to recover it. 00:29:33.460 [2024-07-15 16:21:09.232309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.460 [2024-07-15 16:21:09.232407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.460 [2024-07-15 16:21:09.232424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.460 [2024-07-15 16:21:09.232431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.460 [2024-07-15 16:21:09.232437] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.460 [2024-07-15 16:21:09.232452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.460 qpair failed and we were unable to recover it. 00:29:33.460 [2024-07-15 16:21:09.242302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.460 [2024-07-15 16:21:09.242393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.460 [2024-07-15 16:21:09.242410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.460 [2024-07-15 16:21:09.242418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.460 [2024-07-15 16:21:09.242425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.460 [2024-07-15 16:21:09.242440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.460 qpair failed and we were unable to recover it. 
00:29:33.460 [2024-07-15 16:21:09.252331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.460 [2024-07-15 16:21:09.252415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.460 [2024-07-15 16:21:09.252431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.460 [2024-07-15 16:21:09.252439] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.460 [2024-07-15 16:21:09.252446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.460 [2024-07-15 16:21:09.252461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.460 qpair failed and we were unable to recover it. 00:29:33.460 [2024-07-15 16:21:09.262338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.460 [2024-07-15 16:21:09.262421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.460 [2024-07-15 16:21:09.262437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.461 [2024-07-15 16:21:09.262445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.461 [2024-07-15 16:21:09.262452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.461 [2024-07-15 16:21:09.262466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.461 qpair failed and we were unable to recover it. 00:29:33.461 [2024-07-15 16:21:09.272456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.461 [2024-07-15 16:21:09.272542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.461 [2024-07-15 16:21:09.272558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.461 [2024-07-15 16:21:09.272566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.461 [2024-07-15 16:21:09.272572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.461 [2024-07-15 16:21:09.272586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.461 qpair failed and we were unable to recover it. 
00:29:33.461 [2024-07-15 16:21:09.282305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.461 [2024-07-15 16:21:09.282389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.461 [2024-07-15 16:21:09.282405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.461 [2024-07-15 16:21:09.282415] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.461 [2024-07-15 16:21:09.282422] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.461 [2024-07-15 16:21:09.282436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.461 qpair failed and we were unable to recover it. 00:29:33.461 [2024-07-15 16:21:09.292390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.461 [2024-07-15 16:21:09.292487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.461 [2024-07-15 16:21:09.292504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.461 [2024-07-15 16:21:09.292512] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.461 [2024-07-15 16:21:09.292519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.461 [2024-07-15 16:21:09.292534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.461 qpair failed and we were unable to recover it. 00:29:33.723 [2024-07-15 16:21:09.302446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.723 [2024-07-15 16:21:09.302553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.723 [2024-07-15 16:21:09.302570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.723 [2024-07-15 16:21:09.302578] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.723 [2024-07-15 16:21:09.302584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.723 [2024-07-15 16:21:09.302599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.723 qpair failed and we were unable to recover it. 
00:29:33.723 [2024-07-15 16:21:09.312486] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.723 [2024-07-15 16:21:09.312564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.723 [2024-07-15 16:21:09.312585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.723 [2024-07-15 16:21:09.312594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.723 [2024-07-15 16:21:09.312601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.723 [2024-07-15 16:21:09.312615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.723 qpair failed and we were unable to recover it. 00:29:33.723 [2024-07-15 16:21:09.322513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.723 [2024-07-15 16:21:09.322598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.723 [2024-07-15 16:21:09.322615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.723 [2024-07-15 16:21:09.322623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.723 [2024-07-15 16:21:09.322629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.723 [2024-07-15 16:21:09.322643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.723 qpair failed and we were unable to recover it. 00:29:33.723 [2024-07-15 16:21:09.332565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.723 [2024-07-15 16:21:09.332699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.723 [2024-07-15 16:21:09.332715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.723 [2024-07-15 16:21:09.332723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.723 [2024-07-15 16:21:09.332729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.723 [2024-07-15 16:21:09.332743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.723 qpair failed and we were unable to recover it. 
00:29:33.723 [2024-07-15 16:21:09.342523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.723 [2024-07-15 16:21:09.342598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.723 [2024-07-15 16:21:09.342615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.723 [2024-07-15 16:21:09.342622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.723 [2024-07-15 16:21:09.342629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.723 [2024-07-15 16:21:09.342643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.723 qpair failed and we were unable to recover it. 00:29:33.723 [2024-07-15 16:21:09.352635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.723 [2024-07-15 16:21:09.352720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.723 [2024-07-15 16:21:09.352736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.723 [2024-07-15 16:21:09.352744] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.723 [2024-07-15 16:21:09.352750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.723 [2024-07-15 16:21:09.352768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.723 qpair failed and we were unable to recover it. 00:29:33.723 [2024-07-15 16:21:09.362641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.723 [2024-07-15 16:21:09.362732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.723 [2024-07-15 16:21:09.362758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.723 [2024-07-15 16:21:09.362767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.723 [2024-07-15 16:21:09.362774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.723 [2024-07-15 16:21:09.362793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.723 qpair failed and we were unable to recover it. 
00:29:33.723 [2024-07-15 16:21:09.372662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.723 [2024-07-15 16:21:09.372787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.723 [2024-07-15 16:21:09.372812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.723 [2024-07-15 16:21:09.372822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.723 [2024-07-15 16:21:09.372829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.723 [2024-07-15 16:21:09.372848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.723 qpair failed and we were unable to recover it. 00:29:33.723 [2024-07-15 16:21:09.382652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.723 [2024-07-15 16:21:09.382733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.723 [2024-07-15 16:21:09.382758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.723 [2024-07-15 16:21:09.382768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.723 [2024-07-15 16:21:09.382775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.723 [2024-07-15 16:21:09.382794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.723 qpair failed and we were unable to recover it. 00:29:33.723 [2024-07-15 16:21:09.392724] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.723 [2024-07-15 16:21:09.392809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.723 [2024-07-15 16:21:09.392827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.723 [2024-07-15 16:21:09.392835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.723 [2024-07-15 16:21:09.392841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.723 [2024-07-15 16:21:09.392857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.723 qpair failed and we were unable to recover it. 
00:29:33.723 [2024-07-15 16:21:09.402776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.723 [2024-07-15 16:21:09.402862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.723 [2024-07-15 16:21:09.402883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.723 [2024-07-15 16:21:09.402891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.723 [2024-07-15 16:21:09.402897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.723 [2024-07-15 16:21:09.402912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.723 qpair failed and we were unable to recover it. 00:29:33.723 [2024-07-15 16:21:09.412631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.723 [2024-07-15 16:21:09.412718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.723 [2024-07-15 16:21:09.412744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.724 [2024-07-15 16:21:09.412753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.724 [2024-07-15 16:21:09.412760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.724 [2024-07-15 16:21:09.412779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.724 qpair failed and we were unable to recover it. 00:29:33.724 [2024-07-15 16:21:09.422754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.724 [2024-07-15 16:21:09.422836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.724 [2024-07-15 16:21:09.422861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.724 [2024-07-15 16:21:09.422870] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.724 [2024-07-15 16:21:09.422878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.724 [2024-07-15 16:21:09.422898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.724 qpair failed and we were unable to recover it. 
00:29:33.724 [2024-07-15 16:21:09.432957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.724 [2024-07-15 16:21:09.433064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.724 [2024-07-15 16:21:09.433089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.724 [2024-07-15 16:21:09.433099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.724 [2024-07-15 16:21:09.433106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.724 [2024-07-15 16:21:09.433129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.724 qpair failed and we were unable to recover it. 00:29:33.724 [2024-07-15 16:21:09.442958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.724 [2024-07-15 16:21:09.443045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.724 [2024-07-15 16:21:09.443065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.724 [2024-07-15 16:21:09.443074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.724 [2024-07-15 16:21:09.443082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.724 [2024-07-15 16:21:09.443106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.724 qpair failed and we were unable to recover it. 00:29:33.724 [2024-07-15 16:21:09.452875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.724 [2024-07-15 16:21:09.452962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.724 [2024-07-15 16:21:09.452979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.724 [2024-07-15 16:21:09.452988] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.724 [2024-07-15 16:21:09.452995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.724 [2024-07-15 16:21:09.453011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.724 qpair failed and we were unable to recover it. 
00:29:33.724 [2024-07-15 16:21:09.462919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.724 [2024-07-15 16:21:09.463008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.724 [2024-07-15 16:21:09.463024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.724 [2024-07-15 16:21:09.463032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.724 [2024-07-15 16:21:09.463039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.724 [2024-07-15 16:21:09.463053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.724 qpair failed and we were unable to recover it. 00:29:33.724 [2024-07-15 16:21:09.472963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.724 [2024-07-15 16:21:09.473042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.724 [2024-07-15 16:21:09.473058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.724 [2024-07-15 16:21:09.473065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.724 [2024-07-15 16:21:09.473072] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.724 [2024-07-15 16:21:09.473086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.724 qpair failed and we were unable to recover it. 00:29:33.724 [2024-07-15 16:21:09.482915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.724 [2024-07-15 16:21:09.483041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.724 [2024-07-15 16:21:09.483057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.724 [2024-07-15 16:21:09.483065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.724 [2024-07-15 16:21:09.483071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.724 [2024-07-15 16:21:09.483085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.724 qpair failed and we were unable to recover it. 
00:29:33.724 [2024-07-15 16:21:09.492935] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.724 [2024-07-15 16:21:09.493022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.724 [2024-07-15 16:21:09.493045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.724 [2024-07-15 16:21:09.493053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.724 [2024-07-15 16:21:09.493059] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.724 [2024-07-15 16:21:09.493074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.724 qpair failed and we were unable to recover it. 00:29:33.724 [2024-07-15 16:21:09.502999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.724 [2024-07-15 16:21:09.503072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.724 [2024-07-15 16:21:09.503088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.724 [2024-07-15 16:21:09.503095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.724 [2024-07-15 16:21:09.503103] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.724 [2024-07-15 16:21:09.503117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.724 qpair failed and we were unable to recover it. 00:29:33.724 [2024-07-15 16:21:09.512920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.724 [2024-07-15 16:21:09.512999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.724 [2024-07-15 16:21:09.513015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.724 [2024-07-15 16:21:09.513024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.724 [2024-07-15 16:21:09.513030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.724 [2024-07-15 16:21:09.513044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.724 qpair failed and we were unable to recover it. 
00:29:33.724 [2024-07-15 16:21:09.523048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.724 [2024-07-15 16:21:09.523131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.724 [2024-07-15 16:21:09.523149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.724 [2024-07-15 16:21:09.523156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.724 [2024-07-15 16:21:09.523163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.724 [2024-07-15 16:21:09.523177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.724 qpair failed and we were unable to recover it. 00:29:33.724 [2024-07-15 16:21:09.533045] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.724 [2024-07-15 16:21:09.533131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.724 [2024-07-15 16:21:09.533148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.724 [2024-07-15 16:21:09.533155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.724 [2024-07-15 16:21:09.533166] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.724 [2024-07-15 16:21:09.533181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.724 qpair failed and we were unable to recover it. 00:29:33.724 [2024-07-15 16:21:09.543102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.724 [2024-07-15 16:21:09.543182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.724 [2024-07-15 16:21:09.543199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.724 [2024-07-15 16:21:09.543206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.724 [2024-07-15 16:21:09.543213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.724 [2024-07-15 16:21:09.543227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.724 qpair failed and we were unable to recover it. 
00:29:33.724 [2024-07-15 16:21:09.553157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.724 [2024-07-15 16:21:09.553240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.724 [2024-07-15 16:21:09.553256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.724 [2024-07-15 16:21:09.553263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.725 [2024-07-15 16:21:09.553270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.725 [2024-07-15 16:21:09.553284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.725 qpair failed and we were unable to recover it. 00:29:33.725 [2024-07-15 16:21:09.563169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.987 [2024-07-15 16:21:09.563249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.987 [2024-07-15 16:21:09.563265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.987 [2024-07-15 16:21:09.563275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.987 [2024-07-15 16:21:09.563282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.987 [2024-07-15 16:21:09.563297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.987 qpair failed and we were unable to recover it. 00:29:33.987 [2024-07-15 16:21:09.573186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.987 [2024-07-15 16:21:09.573304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.987 [2024-07-15 16:21:09.573321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.987 [2024-07-15 16:21:09.573328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.987 [2024-07-15 16:21:09.573335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.987 [2024-07-15 16:21:09.573349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.987 qpair failed and we were unable to recover it. 
00:29:33.987 [2024-07-15 16:21:09.583159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.987 [2024-07-15 16:21:09.583240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.987 [2024-07-15 16:21:09.583257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.987 [2024-07-15 16:21:09.583264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.987 [2024-07-15 16:21:09.583271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.987 [2024-07-15 16:21:09.583285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.987 qpair failed and we were unable to recover it. 00:29:33.987 [2024-07-15 16:21:09.593139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.987 [2024-07-15 16:21:09.593223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.987 [2024-07-15 16:21:09.593240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.987 [2024-07-15 16:21:09.593248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.987 [2024-07-15 16:21:09.593254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.987 [2024-07-15 16:21:09.593269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.987 qpair failed and we were unable to recover it. 00:29:33.987 [2024-07-15 16:21:09.603313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.987 [2024-07-15 16:21:09.603397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.987 [2024-07-15 16:21:09.603413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.987 [2024-07-15 16:21:09.603420] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.987 [2024-07-15 16:21:09.603427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.987 [2024-07-15 16:21:09.603441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.987 qpair failed and we were unable to recover it. 
00:29:33.987 [2024-07-15 16:21:09.613322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.987 [2024-07-15 16:21:09.613438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.987 [2024-07-15 16:21:09.613455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.987 [2024-07-15 16:21:09.613463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.987 [2024-07-15 16:21:09.613470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.987 [2024-07-15 16:21:09.613484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.987 qpair failed and we were unable to recover it. 00:29:33.987 [2024-07-15 16:21:09.623215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.987 [2024-07-15 16:21:09.623303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.987 [2024-07-15 16:21:09.623319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.987 [2024-07-15 16:21:09.623327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.987 [2024-07-15 16:21:09.623337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.987 [2024-07-15 16:21:09.623352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.987 qpair failed and we were unable to recover it. 00:29:33.987 [2024-07-15 16:21:09.633388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.987 [2024-07-15 16:21:09.633469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.987 [2024-07-15 16:21:09.633485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.987 [2024-07-15 16:21:09.633493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.987 [2024-07-15 16:21:09.633500] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.987 [2024-07-15 16:21:09.633515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.987 qpair failed and we were unable to recover it. 
00:29:33.987 [2024-07-15 16:21:09.643530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.987 [2024-07-15 16:21:09.643617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.987 [2024-07-15 16:21:09.643633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.987 [2024-07-15 16:21:09.643641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.987 [2024-07-15 16:21:09.643648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.987 [2024-07-15 16:21:09.643663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.987 qpair failed and we were unable to recover it. 00:29:33.987 [2024-07-15 16:21:09.653376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.987 [2024-07-15 16:21:09.653459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.987 [2024-07-15 16:21:09.653475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.987 [2024-07-15 16:21:09.653483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.987 [2024-07-15 16:21:09.653490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.987 [2024-07-15 16:21:09.653504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.987 qpair failed and we were unable to recover it. 00:29:33.987 [2024-07-15 16:21:09.663338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.987 [2024-07-15 16:21:09.663443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.987 [2024-07-15 16:21:09.663460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.988 [2024-07-15 16:21:09.663468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.988 [2024-07-15 16:21:09.663474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.988 [2024-07-15 16:21:09.663489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.988 qpair failed and we were unable to recover it. 
00:29:33.988 [2024-07-15 16:21:09.673376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.988 [2024-07-15 16:21:09.673465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.988 [2024-07-15 16:21:09.673484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.988 [2024-07-15 16:21:09.673492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.988 [2024-07-15 16:21:09.673498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.988 [2024-07-15 16:21:09.673514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.988 qpair failed and we were unable to recover it. 00:29:33.988 [2024-07-15 16:21:09.683504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.988 [2024-07-15 16:21:09.683587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.988 [2024-07-15 16:21:09.683603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.988 [2024-07-15 16:21:09.683611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.988 [2024-07-15 16:21:09.683618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.988 [2024-07-15 16:21:09.683632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.988 qpair failed and we were unable to recover it. 00:29:33.988 [2024-07-15 16:21:09.693478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.988 [2024-07-15 16:21:09.693561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.988 [2024-07-15 16:21:09.693577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.988 [2024-07-15 16:21:09.693585] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.988 [2024-07-15 16:21:09.693592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.988 [2024-07-15 16:21:09.693606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.988 qpair failed and we were unable to recover it. 
00:29:33.988 [2024-07-15 16:21:09.703490] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.988 [2024-07-15 16:21:09.703565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.988 [2024-07-15 16:21:09.703582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.988 [2024-07-15 16:21:09.703589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.988 [2024-07-15 16:21:09.703596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.988 [2024-07-15 16:21:09.703610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.988 qpair failed and we were unable to recover it. 00:29:33.988 [2024-07-15 16:21:09.713545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.988 [2024-07-15 16:21:09.713619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.988 [2024-07-15 16:21:09.713635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.988 [2024-07-15 16:21:09.713642] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.988 [2024-07-15 16:21:09.713653] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.988 [2024-07-15 16:21:09.713667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.988 qpair failed and we were unable to recover it. 00:29:33.988 [2024-07-15 16:21:09.723667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.988 [2024-07-15 16:21:09.723759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.988 [2024-07-15 16:21:09.723775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.988 [2024-07-15 16:21:09.723782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.988 [2024-07-15 16:21:09.723788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.988 [2024-07-15 16:21:09.723802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.988 qpair failed and we were unable to recover it. 
00:29:33.988 [2024-07-15 16:21:09.733631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.988 [2024-07-15 16:21:09.733722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.988 [2024-07-15 16:21:09.733748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.988 [2024-07-15 16:21:09.733756] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.988 [2024-07-15 16:21:09.733763] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.988 [2024-07-15 16:21:09.733783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.988 qpair failed and we were unable to recover it. 00:29:33.988 [2024-07-15 16:21:09.743620] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.988 [2024-07-15 16:21:09.743701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.988 [2024-07-15 16:21:09.743719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.988 [2024-07-15 16:21:09.743726] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.988 [2024-07-15 16:21:09.743734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.988 [2024-07-15 16:21:09.743749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.988 qpair failed and we were unable to recover it. 00:29:33.988 [2024-07-15 16:21:09.753537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.988 [2024-07-15 16:21:09.753614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.988 [2024-07-15 16:21:09.753630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.988 [2024-07-15 16:21:09.753637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.988 [2024-07-15 16:21:09.753644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.988 [2024-07-15 16:21:09.753661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.988 qpair failed and we were unable to recover it. 
00:29:33.988 [2024-07-15 16:21:09.763714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.988 [2024-07-15 16:21:09.763797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.988 [2024-07-15 16:21:09.763814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.988 [2024-07-15 16:21:09.763821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.988 [2024-07-15 16:21:09.763828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.988 [2024-07-15 16:21:09.763842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.988 qpair failed and we were unable to recover it. 00:29:33.988 [2024-07-15 16:21:09.773687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.988 [2024-07-15 16:21:09.773773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.988 [2024-07-15 16:21:09.773800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.988 [2024-07-15 16:21:09.773809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.988 [2024-07-15 16:21:09.773816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.988 [2024-07-15 16:21:09.773835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.988 qpair failed and we were unable to recover it. 00:29:33.988 [2024-07-15 16:21:09.783750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.988 [2024-07-15 16:21:09.783837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.988 [2024-07-15 16:21:09.783863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.988 [2024-07-15 16:21:09.783873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.988 [2024-07-15 16:21:09.783880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.988 [2024-07-15 16:21:09.783899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.988 qpair failed and we were unable to recover it. 
00:29:33.989 [2024-07-15 16:21:09.793781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.989 [2024-07-15 16:21:09.793865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.989 [2024-07-15 16:21:09.793891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.989 [2024-07-15 16:21:09.793900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.989 [2024-07-15 16:21:09.793908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.989 [2024-07-15 16:21:09.793927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.989 qpair failed and we were unable to recover it. 00:29:33.989 [2024-07-15 16:21:09.803862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.989 [2024-07-15 16:21:09.803953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.989 [2024-07-15 16:21:09.803978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.989 [2024-07-15 16:21:09.803992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.989 [2024-07-15 16:21:09.804000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.989 [2024-07-15 16:21:09.804019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.989 qpair failed and we were unable to recover it. 00:29:33.989 [2024-07-15 16:21:09.813806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.989 [2024-07-15 16:21:09.813889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.989 [2024-07-15 16:21:09.813907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.989 [2024-07-15 16:21:09.813915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.989 [2024-07-15 16:21:09.813922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.989 [2024-07-15 16:21:09.813937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.989 qpair failed and we were unable to recover it. 
00:29:33.989 [2024-07-15 16:21:09.823856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.989 [2024-07-15 16:21:09.823961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.989 [2024-07-15 16:21:09.823979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.989 [2024-07-15 16:21:09.823987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.989 [2024-07-15 16:21:09.823993] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:33.989 [2024-07-15 16:21:09.824008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.989 qpair failed and we were unable to recover it. 00:29:34.251 [2024-07-15 16:21:09.833855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.251 [2024-07-15 16:21:09.833938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.251 [2024-07-15 16:21:09.833954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.251 [2024-07-15 16:21:09.833963] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.251 [2024-07-15 16:21:09.833969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.251 [2024-07-15 16:21:09.833984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.251 qpair failed and we were unable to recover it. 00:29:34.251 [2024-07-15 16:21:09.843938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.251 [2024-07-15 16:21:09.844017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.251 [2024-07-15 16:21:09.844034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.251 [2024-07-15 16:21:09.844042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.251 [2024-07-15 16:21:09.844049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.251 [2024-07-15 16:21:09.844063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.251 qpair failed and we were unable to recover it. 
00:29:34.251 [2024-07-15 16:21:09.853909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.251 [2024-07-15 16:21:09.853988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.251 [2024-07-15 16:21:09.854004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.251 [2024-07-15 16:21:09.854011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.251 [2024-07-15 16:21:09.854019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.251 [2024-07-15 16:21:09.854033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.251 qpair failed and we were unable to recover it. 00:29:34.252 [2024-07-15 16:21:09.863939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.252 [2024-07-15 16:21:09.864015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.252 [2024-07-15 16:21:09.864031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.252 [2024-07-15 16:21:09.864039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.252 [2024-07-15 16:21:09.864046] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.252 [2024-07-15 16:21:09.864060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.252 qpair failed and we were unable to recover it. 00:29:34.252 [2024-07-15 16:21:09.874034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.252 [2024-07-15 16:21:09.874126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.252 [2024-07-15 16:21:09.874143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.252 [2024-07-15 16:21:09.874151] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.252 [2024-07-15 16:21:09.874159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.252 [2024-07-15 16:21:09.874173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.252 qpair failed and we were unable to recover it. 
00:29:34.252 [2024-07-15 16:21:09.884079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.252 [2024-07-15 16:21:09.884163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.252 [2024-07-15 16:21:09.884179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.252 [2024-07-15 16:21:09.884187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.252 [2024-07-15 16:21:09.884194] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.252 [2024-07-15 16:21:09.884208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.252 qpair failed and we were unable to recover it. 00:29:34.252 [2024-07-15 16:21:09.894066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.252 [2024-07-15 16:21:09.894148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.252 [2024-07-15 16:21:09.894165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.252 [2024-07-15 16:21:09.894177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.252 [2024-07-15 16:21:09.894183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.252 [2024-07-15 16:21:09.894198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.252 qpair failed and we were unable to recover it. 00:29:34.252 [2024-07-15 16:21:09.904067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.252 [2024-07-15 16:21:09.904143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.252 [2024-07-15 16:21:09.904159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.252 [2024-07-15 16:21:09.904167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.252 [2024-07-15 16:21:09.904174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.252 [2024-07-15 16:21:09.904189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.252 qpair failed and we were unable to recover it. 
00:29:34.252 [2024-07-15 16:21:09.914112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.252 [2024-07-15 16:21:09.914194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.252 [2024-07-15 16:21:09.914211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.252 [2024-07-15 16:21:09.914219] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.252 [2024-07-15 16:21:09.914225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.252 [2024-07-15 16:21:09.914239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.252 qpair failed and we were unable to recover it. 00:29:34.252 [2024-07-15 16:21:09.924186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.252 [2024-07-15 16:21:09.924268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.252 [2024-07-15 16:21:09.924283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.252 [2024-07-15 16:21:09.924291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.252 [2024-07-15 16:21:09.924299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.252 [2024-07-15 16:21:09.924314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.252 qpair failed and we were unable to recover it. 00:29:34.252 [2024-07-15 16:21:09.934132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.252 [2024-07-15 16:21:09.934214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.252 [2024-07-15 16:21:09.934230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.252 [2024-07-15 16:21:09.934237] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.252 [2024-07-15 16:21:09.934244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.252 [2024-07-15 16:21:09.934259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.252 qpair failed and we were unable to recover it. 
00:29:34.252 [2024-07-15 16:21:09.944162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.252 [2024-07-15 16:21:09.944478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.252 [2024-07-15 16:21:09.944495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.252 [2024-07-15 16:21:09.944503] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.252 [2024-07-15 16:21:09.944509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.252 [2024-07-15 16:21:09.944523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.252 qpair failed and we were unable to recover it. 00:29:34.252 [2024-07-15 16:21:09.954237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.252 [2024-07-15 16:21:09.954331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.252 [2024-07-15 16:21:09.954348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.252 [2024-07-15 16:21:09.954355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.252 [2024-07-15 16:21:09.954362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.252 [2024-07-15 16:21:09.954376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.252 qpair failed and we were unable to recover it. 00:29:34.252 [2024-07-15 16:21:09.964324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.252 [2024-07-15 16:21:09.964398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.252 [2024-07-15 16:21:09.964414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.252 [2024-07-15 16:21:09.964421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.252 [2024-07-15 16:21:09.964429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.252 [2024-07-15 16:21:09.964443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.252 qpair failed and we were unable to recover it. 
00:29:34.252 [2024-07-15 16:21:09.974260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.252 [2024-07-15 16:21:09.974350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.252 [2024-07-15 16:21:09.974367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.252 [2024-07-15 16:21:09.974374] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.252 [2024-07-15 16:21:09.974380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.252 [2024-07-15 16:21:09.974394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.252 qpair failed and we were unable to recover it. 00:29:34.252 [2024-07-15 16:21:09.984327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.252 [2024-07-15 16:21:09.984488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.252 [2024-07-15 16:21:09.984508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.252 [2024-07-15 16:21:09.984516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.252 [2024-07-15 16:21:09.984522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.252 [2024-07-15 16:21:09.984536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.252 qpair failed and we were unable to recover it. 00:29:34.252 [2024-07-15 16:21:09.994282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.252 [2024-07-15 16:21:09.994355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.252 [2024-07-15 16:21:09.994371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.252 [2024-07-15 16:21:09.994379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.252 [2024-07-15 16:21:09.994387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.252 [2024-07-15 16:21:09.994401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.252 qpair failed and we were unable to recover it. 
00:29:34.252 [2024-07-15 16:21:10.004392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.252 [2024-07-15 16:21:10.004467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.253 [2024-07-15 16:21:10.004485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.253 [2024-07-15 16:21:10.004493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.253 [2024-07-15 16:21:10.004499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.253 [2024-07-15 16:21:10.004514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.253 qpair failed and we were unable to recover it. 00:29:34.253 [2024-07-15 16:21:10.014472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.253 [2024-07-15 16:21:10.014564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.253 [2024-07-15 16:21:10.014581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.253 [2024-07-15 16:21:10.014589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.253 [2024-07-15 16:21:10.014595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.253 [2024-07-15 16:21:10.014610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.253 qpair failed and we were unable to recover it. 00:29:34.253 [2024-07-15 16:21:10.024437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.253 [2024-07-15 16:21:10.024523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.253 [2024-07-15 16:21:10.024542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.253 [2024-07-15 16:21:10.024550] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.253 [2024-07-15 16:21:10.024557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.253 [2024-07-15 16:21:10.024574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.253 qpair failed and we were unable to recover it. 
00:29:34.253 [2024-07-15 16:21:10.034415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.253 [2024-07-15 16:21:10.034490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.253 [2024-07-15 16:21:10.034507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.253 [2024-07-15 16:21:10.034514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.253 [2024-07-15 16:21:10.034521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.253 [2024-07-15 16:21:10.034535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.253 qpair failed and we were unable to recover it. 00:29:34.253 [2024-07-15 16:21:10.044493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.253 [2024-07-15 16:21:10.044617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.253 [2024-07-15 16:21:10.044633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.253 [2024-07-15 16:21:10.044641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.253 [2024-07-15 16:21:10.044647] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.253 [2024-07-15 16:21:10.044662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.253 qpair failed and we were unable to recover it. 00:29:34.253 [2024-07-15 16:21:10.054430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.253 [2024-07-15 16:21:10.054510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.253 [2024-07-15 16:21:10.054526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.253 [2024-07-15 16:21:10.054534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.253 [2024-07-15 16:21:10.054541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.253 [2024-07-15 16:21:10.054555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.253 qpair failed and we were unable to recover it. 
00:29:34.253 [2024-07-15 16:21:10.064515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.253 [2024-07-15 16:21:10.064642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.253 [2024-07-15 16:21:10.064658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.253 [2024-07-15 16:21:10.064666] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.253 [2024-07-15 16:21:10.064673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.253 [2024-07-15 16:21:10.064687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.253 qpair failed and we were unable to recover it. 00:29:34.253 [2024-07-15 16:21:10.074514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.253 [2024-07-15 16:21:10.074587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.253 [2024-07-15 16:21:10.074608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.253 [2024-07-15 16:21:10.074617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.253 [2024-07-15 16:21:10.074623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.253 [2024-07-15 16:21:10.074638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.253 qpair failed and we were unable to recover it. 00:29:34.253 [2024-07-15 16:21:10.084544] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.253 [2024-07-15 16:21:10.084624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.253 [2024-07-15 16:21:10.084644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.253 [2024-07-15 16:21:10.084652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.253 [2024-07-15 16:21:10.084659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.253 [2024-07-15 16:21:10.084676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.253 qpair failed and we were unable to recover it. 
00:29:34.515 [2024-07-15 16:21:10.094604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.515 [2024-07-15 16:21:10.094685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.515 [2024-07-15 16:21:10.094702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.515 [2024-07-15 16:21:10.094711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.515 [2024-07-15 16:21:10.094717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.515 [2024-07-15 16:21:10.094732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.515 qpair failed and we were unable to recover it. 00:29:34.515 [2024-07-15 16:21:10.104609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.515 [2024-07-15 16:21:10.104683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.515 [2024-07-15 16:21:10.104699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.515 [2024-07-15 16:21:10.104707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.515 [2024-07-15 16:21:10.104714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.515 [2024-07-15 16:21:10.104728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.515 qpair failed and we were unable to recover it. 00:29:34.515 [2024-07-15 16:21:10.114516] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.515 [2024-07-15 16:21:10.114587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.515 [2024-07-15 16:21:10.114603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.515 [2024-07-15 16:21:10.114610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.515 [2024-07-15 16:21:10.114617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.515 [2024-07-15 16:21:10.114636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.515 qpair failed and we were unable to recover it. 
00:29:34.516 [2024-07-15 16:21:10.124660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.516 [2024-07-15 16:21:10.124734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.516 [2024-07-15 16:21:10.124750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.516 [2024-07-15 16:21:10.124758] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.516 [2024-07-15 16:21:10.124765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.516 [2024-07-15 16:21:10.124779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.516 qpair failed and we were unable to recover it. 00:29:34.516 [2024-07-15 16:21:10.134682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.516 [2024-07-15 16:21:10.134760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.516 [2024-07-15 16:21:10.134775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.516 [2024-07-15 16:21:10.134783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.516 [2024-07-15 16:21:10.134790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.516 [2024-07-15 16:21:10.134804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.516 qpair failed and we were unable to recover it. 00:29:34.516 [2024-07-15 16:21:10.144708] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.516 [2024-07-15 16:21:10.144786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.516 [2024-07-15 16:21:10.144811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.516 [2024-07-15 16:21:10.144821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.516 [2024-07-15 16:21:10.144828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.516 [2024-07-15 16:21:10.144847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.516 qpair failed and we were unable to recover it. 
00:29:34.516 [2024-07-15 16:21:10.154628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.516 [2024-07-15 16:21:10.154707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.516 [2024-07-15 16:21:10.154732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.516 [2024-07-15 16:21:10.154741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.516 [2024-07-15 16:21:10.154748] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.516 [2024-07-15 16:21:10.154767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.516 qpair failed and we were unable to recover it. 00:29:34.516 [2024-07-15 16:21:10.164815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.516 [2024-07-15 16:21:10.164945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.516 [2024-07-15 16:21:10.164975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.516 [2024-07-15 16:21:10.164984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.516 [2024-07-15 16:21:10.164991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.516 [2024-07-15 16:21:10.165010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.516 qpair failed and we were unable to recover it. 00:29:34.516 [2024-07-15 16:21:10.174802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.516 [2024-07-15 16:21:10.174884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.516 [2024-07-15 16:21:10.174902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.516 [2024-07-15 16:21:10.174910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.516 [2024-07-15 16:21:10.174916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.516 [2024-07-15 16:21:10.174932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.516 qpair failed and we were unable to recover it. 
00:29:34.516 [2024-07-15 16:21:10.184798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.516 [2024-07-15 16:21:10.184877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.516 [2024-07-15 16:21:10.184902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.516 [2024-07-15 16:21:10.184911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.516 [2024-07-15 16:21:10.184918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.516 [2024-07-15 16:21:10.184936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.516 qpair failed and we were unable to recover it. 00:29:34.516 [2024-07-15 16:21:10.194734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.516 [2024-07-15 16:21:10.194814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.516 [2024-07-15 16:21:10.194833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.516 [2024-07-15 16:21:10.194841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.516 [2024-07-15 16:21:10.194849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.516 [2024-07-15 16:21:10.194865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.516 qpair failed and we were unable to recover it. 00:29:34.516 [2024-07-15 16:21:10.204863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.516 [2024-07-15 16:21:10.204937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.516 [2024-07-15 16:21:10.204954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.516 [2024-07-15 16:21:10.204962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.516 [2024-07-15 16:21:10.204969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.516 [2024-07-15 16:21:10.204988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.516 qpair failed and we were unable to recover it. 
00:29:34.516 [2024-07-15 16:21:10.214922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.516 [2024-07-15 16:21:10.215001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.516 [2024-07-15 16:21:10.215021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.516 [2024-07-15 16:21:10.215029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.516 [2024-07-15 16:21:10.215036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.516 [2024-07-15 16:21:10.215051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.516 qpair failed and we were unable to recover it. 00:29:34.516 [2024-07-15 16:21:10.224914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.516 [2024-07-15 16:21:10.225012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.516 [2024-07-15 16:21:10.225029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.516 [2024-07-15 16:21:10.225037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.516 [2024-07-15 16:21:10.225044] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.516 [2024-07-15 16:21:10.225059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.516 qpair failed and we were unable to recover it. 00:29:34.516 [2024-07-15 16:21:10.234948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.516 [2024-07-15 16:21:10.235058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.516 [2024-07-15 16:21:10.235075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.516 [2024-07-15 16:21:10.235083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.516 [2024-07-15 16:21:10.235090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.516 [2024-07-15 16:21:10.235104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.516 qpair failed and we were unable to recover it. 
00:29:34.516 [2024-07-15 16:21:10.244960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.516 [2024-07-15 16:21:10.245034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.516 [2024-07-15 16:21:10.245050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.516 [2024-07-15 16:21:10.245058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.516 [2024-07-15 16:21:10.245064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.516 [2024-07-15 16:21:10.245079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.516 qpair failed and we were unable to recover it. 00:29:34.516 [2024-07-15 16:21:10.254996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.517 [2024-07-15 16:21:10.255076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.517 [2024-07-15 16:21:10.255099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.517 [2024-07-15 16:21:10.255107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.517 [2024-07-15 16:21:10.255114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.517 [2024-07-15 16:21:10.255133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.517 qpair failed and we were unable to recover it. 00:29:34.517 [2024-07-15 16:21:10.265044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.517 [2024-07-15 16:21:10.265120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.517 [2024-07-15 16:21:10.265140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.517 [2024-07-15 16:21:10.265148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.517 [2024-07-15 16:21:10.265154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.517 [2024-07-15 16:21:10.265169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.517 qpair failed and we were unable to recover it. 
00:29:34.517 [2024-07-15 16:21:10.275026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.517 [2024-07-15 16:21:10.275099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.517 [2024-07-15 16:21:10.275116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.517 [2024-07-15 16:21:10.275128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.517 [2024-07-15 16:21:10.275135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.517 [2024-07-15 16:21:10.275151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.517 qpair failed and we were unable to recover it. 00:29:34.517 [2024-07-15 16:21:10.285170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.517 [2024-07-15 16:21:10.285247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.517 [2024-07-15 16:21:10.285263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.517 [2024-07-15 16:21:10.285271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.517 [2024-07-15 16:21:10.285277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.517 [2024-07-15 16:21:10.285292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.517 qpair failed and we were unable to recover it. 00:29:34.517 [2024-07-15 16:21:10.295150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.517 [2024-07-15 16:21:10.295233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.517 [2024-07-15 16:21:10.295250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.517 [2024-07-15 16:21:10.295258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.517 [2024-07-15 16:21:10.295267] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.517 [2024-07-15 16:21:10.295282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.517 qpair failed and we were unable to recover it. 
00:29:34.517 [2024-07-15 16:21:10.305120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.517 [2024-07-15 16:21:10.305198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.517 [2024-07-15 16:21:10.305215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.517 [2024-07-15 16:21:10.305222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.517 [2024-07-15 16:21:10.305229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.517 [2024-07-15 16:21:10.305243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.517 qpair failed and we were unable to recover it. 00:29:34.517 [2024-07-15 16:21:10.315185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.517 [2024-07-15 16:21:10.315260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.517 [2024-07-15 16:21:10.315276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.517 [2024-07-15 16:21:10.315284] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.517 [2024-07-15 16:21:10.315291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.517 [2024-07-15 16:21:10.315305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.517 qpair failed and we were unable to recover it. 00:29:34.517 [2024-07-15 16:21:10.325226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.517 [2024-07-15 16:21:10.325301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.517 [2024-07-15 16:21:10.325317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.517 [2024-07-15 16:21:10.325325] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.517 [2024-07-15 16:21:10.325332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.517 [2024-07-15 16:21:10.325346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.517 qpair failed and we were unable to recover it. 
00:29:34.517 [2024-07-15 16:21:10.335212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.517 [2024-07-15 16:21:10.335292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.517 [2024-07-15 16:21:10.335309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.517 [2024-07-15 16:21:10.335316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.517 [2024-07-15 16:21:10.335323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.517 [2024-07-15 16:21:10.335338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.517 qpair failed and we were unable to recover it. 00:29:34.517 [2024-07-15 16:21:10.345188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.517 [2024-07-15 16:21:10.345274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.517 [2024-07-15 16:21:10.345290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.517 [2024-07-15 16:21:10.345298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.517 [2024-07-15 16:21:10.345304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.517 [2024-07-15 16:21:10.345319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.517 qpair failed and we were unable to recover it. 00:29:34.517 [2024-07-15 16:21:10.355157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.517 [2024-07-15 16:21:10.355238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.517 [2024-07-15 16:21:10.355254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.517 [2024-07-15 16:21:10.355262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.517 [2024-07-15 16:21:10.355268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.517 [2024-07-15 16:21:10.355282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.517 qpair failed and we were unable to recover it. 
00:29:34.779 [2024-07-15 16:21:10.365336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.779 [2024-07-15 16:21:10.365423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.779 [2024-07-15 16:21:10.365440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.779 [2024-07-15 16:21:10.365448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.779 [2024-07-15 16:21:10.365454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.779 [2024-07-15 16:21:10.365469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.779 qpair failed and we were unable to recover it. 00:29:34.779 [2024-07-15 16:21:10.375362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.779 [2024-07-15 16:21:10.375443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.779 [2024-07-15 16:21:10.375464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.779 [2024-07-15 16:21:10.375473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.779 [2024-07-15 16:21:10.375480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.779 [2024-07-15 16:21:10.375495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.780 qpair failed and we were unable to recover it. 00:29:34.780 [2024-07-15 16:21:10.385360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.780 [2024-07-15 16:21:10.385435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.780 [2024-07-15 16:21:10.385451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.780 [2024-07-15 16:21:10.385459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.780 [2024-07-15 16:21:10.385470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.780 [2024-07-15 16:21:10.385484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.780 qpair failed and we were unable to recover it. 
00:29:34.780 [2024-07-15 16:21:10.395368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.780 [2024-07-15 16:21:10.395443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.780 [2024-07-15 16:21:10.395460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.780 [2024-07-15 16:21:10.395467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.780 [2024-07-15 16:21:10.395474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.780 [2024-07-15 16:21:10.395490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.780 qpair failed and we were unable to recover it. 00:29:34.780 [2024-07-15 16:21:10.405400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.780 [2024-07-15 16:21:10.405478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.780 [2024-07-15 16:21:10.405494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.780 [2024-07-15 16:21:10.405501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.780 [2024-07-15 16:21:10.405508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.780 [2024-07-15 16:21:10.405522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.780 qpair failed and we were unable to recover it. 00:29:34.780 [2024-07-15 16:21:10.415420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.780 [2024-07-15 16:21:10.415500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.780 [2024-07-15 16:21:10.415516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.780 [2024-07-15 16:21:10.415524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.780 [2024-07-15 16:21:10.415531] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.780 [2024-07-15 16:21:10.415545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.780 qpair failed and we were unable to recover it. 
00:29:34.780 [2024-07-15 16:21:10.425433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.780 [2024-07-15 16:21:10.425504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.780 [2024-07-15 16:21:10.425520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.780 [2024-07-15 16:21:10.425527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.780 [2024-07-15 16:21:10.425534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.780 [2024-07-15 16:21:10.425549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.780 qpair failed and we were unable to recover it. 00:29:34.780 [2024-07-15 16:21:10.435501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.780 [2024-07-15 16:21:10.435618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.780 [2024-07-15 16:21:10.435635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.780 [2024-07-15 16:21:10.435642] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.780 [2024-07-15 16:21:10.435648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.780 [2024-07-15 16:21:10.435662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.780 qpair failed and we were unable to recover it. 00:29:34.780 [2024-07-15 16:21:10.445516] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.780 [2024-07-15 16:21:10.445612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.780 [2024-07-15 16:21:10.445629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.780 [2024-07-15 16:21:10.445636] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.780 [2024-07-15 16:21:10.445643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.780 [2024-07-15 16:21:10.445657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.780 qpair failed and we were unable to recover it. 
00:29:34.780 [2024-07-15 16:21:10.455562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.780 [2024-07-15 16:21:10.455689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.780 [2024-07-15 16:21:10.455705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.780 [2024-07-15 16:21:10.455713] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.780 [2024-07-15 16:21:10.455719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.780 [2024-07-15 16:21:10.455734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.780 qpair failed and we were unable to recover it. 00:29:34.780 [2024-07-15 16:21:10.465585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.780 [2024-07-15 16:21:10.465694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.780 [2024-07-15 16:21:10.465711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.780 [2024-07-15 16:21:10.465718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.780 [2024-07-15 16:21:10.465725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.780 [2024-07-15 16:21:10.465739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.780 qpair failed and we were unable to recover it. 00:29:34.780 [2024-07-15 16:21:10.475601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.780 [2024-07-15 16:21:10.475680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.780 [2024-07-15 16:21:10.475706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.780 [2024-07-15 16:21:10.475716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.780 [2024-07-15 16:21:10.475727] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.780 [2024-07-15 16:21:10.475747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.780 qpair failed and we were unable to recover it. 
00:29:34.780 [2024-07-15 16:21:10.485616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.780 [2024-07-15 16:21:10.485698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.780 [2024-07-15 16:21:10.485724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.780 [2024-07-15 16:21:10.485733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.780 [2024-07-15 16:21:10.485740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.780 [2024-07-15 16:21:10.485759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.780 qpair failed and we were unable to recover it. 00:29:34.780 [2024-07-15 16:21:10.495681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.780 [2024-07-15 16:21:10.495761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.780 [2024-07-15 16:21:10.495779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.780 [2024-07-15 16:21:10.495786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.780 [2024-07-15 16:21:10.495793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.780 [2024-07-15 16:21:10.495809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.780 qpair failed and we were unable to recover it. 00:29:34.780 [2024-07-15 16:21:10.505643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.780 [2024-07-15 16:21:10.505722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.780 [2024-07-15 16:21:10.505739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.780 [2024-07-15 16:21:10.505747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.780 [2024-07-15 16:21:10.505753] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.780 [2024-07-15 16:21:10.505767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.780 qpair failed and we were unable to recover it. 
00:29:34.780 [2024-07-15 16:21:10.515740] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.780 [2024-07-15 16:21:10.515832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.780 [2024-07-15 16:21:10.515848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.780 [2024-07-15 16:21:10.515855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.780 [2024-07-15 16:21:10.515862] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.781 [2024-07-15 16:21:10.515876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.781 qpair failed and we were unable to recover it. 00:29:34.781 [2024-07-15 16:21:10.525703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.781 [2024-07-15 16:21:10.525778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.781 [2024-07-15 16:21:10.525794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.781 [2024-07-15 16:21:10.525801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.781 [2024-07-15 16:21:10.525808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.781 [2024-07-15 16:21:10.525823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.781 qpair failed and we were unable to recover it. 00:29:34.781 [2024-07-15 16:21:10.535760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.781 [2024-07-15 16:21:10.535839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.781 [2024-07-15 16:21:10.535855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.781 [2024-07-15 16:21:10.535863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.781 [2024-07-15 16:21:10.535870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.781 [2024-07-15 16:21:10.535884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.781 qpair failed and we were unable to recover it. 
00:29:34.781 [2024-07-15 16:21:10.545762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.781 [2024-07-15 16:21:10.545835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.781 [2024-07-15 16:21:10.545851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.781 [2024-07-15 16:21:10.545858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.781 [2024-07-15 16:21:10.545866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.781 [2024-07-15 16:21:10.545880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.781 qpair failed and we were unable to recover it. 00:29:34.781 [2024-07-15 16:21:10.555792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.781 [2024-07-15 16:21:10.555862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.781 [2024-07-15 16:21:10.555878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.781 [2024-07-15 16:21:10.555885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.781 [2024-07-15 16:21:10.555893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.781 [2024-07-15 16:21:10.555908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.781 qpair failed and we were unable to recover it. 00:29:34.781 [2024-07-15 16:21:10.565780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.781 [2024-07-15 16:21:10.565855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.781 [2024-07-15 16:21:10.565871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.781 [2024-07-15 16:21:10.565883] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.781 [2024-07-15 16:21:10.565889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.781 [2024-07-15 16:21:10.565903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.781 qpair failed and we were unable to recover it. 
00:29:34.781 [2024-07-15 16:21:10.575875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.781 [2024-07-15 16:21:10.575953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.781 [2024-07-15 16:21:10.575969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.781 [2024-07-15 16:21:10.575976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.781 [2024-07-15 16:21:10.575983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.781 [2024-07-15 16:21:10.575998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.781 qpair failed and we were unable to recover it. 00:29:34.781 [2024-07-15 16:21:10.585897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.781 [2024-07-15 16:21:10.585974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.781 [2024-07-15 16:21:10.585989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.781 [2024-07-15 16:21:10.585997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.781 [2024-07-15 16:21:10.586004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.781 [2024-07-15 16:21:10.586019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.781 qpair failed and we were unable to recover it. 00:29:34.781 [2024-07-15 16:21:10.595915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.781 [2024-07-15 16:21:10.595992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.781 [2024-07-15 16:21:10.596008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.781 [2024-07-15 16:21:10.596016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.781 [2024-07-15 16:21:10.596023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.781 [2024-07-15 16:21:10.596037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.781 qpair failed and we were unable to recover it. 
00:29:34.781 [2024-07-15 16:21:10.605974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.781 [2024-07-15 16:21:10.606083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.781 [2024-07-15 16:21:10.606099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.781 [2024-07-15 16:21:10.606107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.781 [2024-07-15 16:21:10.606113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.781 [2024-07-15 16:21:10.606132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.781 qpair failed and we were unable to recover it. 00:29:34.781 [2024-07-15 16:21:10.615991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.781 [2024-07-15 16:21:10.616068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.781 [2024-07-15 16:21:10.616084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.781 [2024-07-15 16:21:10.616091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.781 [2024-07-15 16:21:10.616098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:34.781 [2024-07-15 16:21:10.616112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.781 qpair failed and we were unable to recover it. 00:29:35.044 [2024-07-15 16:21:10.625943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.044 [2024-07-15 16:21:10.626012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.044 [2024-07-15 16:21:10.626028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.044 [2024-07-15 16:21:10.626036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.044 [2024-07-15 16:21:10.626042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.044 [2024-07-15 16:21:10.626057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.044 qpair failed and we were unable to recover it. 
00:29:35.044 [2024-07-15 16:21:10.636038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.044 [2024-07-15 16:21:10.636114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.044 [2024-07-15 16:21:10.636135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.044 [2024-07-15 16:21:10.636143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.044 [2024-07-15 16:21:10.636149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.044 [2024-07-15 16:21:10.636165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.044 qpair failed and we were unable to recover it. 00:29:35.044 [2024-07-15 16:21:10.646054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.044 [2024-07-15 16:21:10.646133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.044 [2024-07-15 16:21:10.646149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.044 [2024-07-15 16:21:10.646157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.044 [2024-07-15 16:21:10.646163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.044 [2024-07-15 16:21:10.646178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.044 qpair failed and we were unable to recover it. 00:29:35.044 [2024-07-15 16:21:10.656072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.044 [2024-07-15 16:21:10.656152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.044 [2024-07-15 16:21:10.656169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.044 [2024-07-15 16:21:10.656181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.044 [2024-07-15 16:21:10.656187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.044 [2024-07-15 16:21:10.656202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.044 qpair failed and we were unable to recover it. 
00:29:35.044 [2024-07-15 16:21:10.666082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.044 [2024-07-15 16:21:10.666158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.044 [2024-07-15 16:21:10.666174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.044 [2024-07-15 16:21:10.666182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.044 [2024-07-15 16:21:10.666188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.044 [2024-07-15 16:21:10.666202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.044 qpair failed and we were unable to recover it. 00:29:35.044 [2024-07-15 16:21:10.676140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.044 [2024-07-15 16:21:10.676222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.044 [2024-07-15 16:21:10.676238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.044 [2024-07-15 16:21:10.676245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.044 [2024-07-15 16:21:10.676252] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.044 [2024-07-15 16:21:10.676266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.044 qpair failed and we were unable to recover it. 00:29:35.044 [2024-07-15 16:21:10.686225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.044 [2024-07-15 16:21:10.686304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.044 [2024-07-15 16:21:10.686321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.044 [2024-07-15 16:21:10.686328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.044 [2024-07-15 16:21:10.686335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.044 [2024-07-15 16:21:10.686349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.044 qpair failed and we were unable to recover it. 
00:29:35.044 [2024-07-15 16:21:10.696294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.044 [2024-07-15 16:21:10.696380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.044 [2024-07-15 16:21:10.696396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.044 [2024-07-15 16:21:10.696404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.044 [2024-07-15 16:21:10.696410] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.044 [2024-07-15 16:21:10.696425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.044 qpair failed and we were unable to recover it. 00:29:35.044 [2024-07-15 16:21:10.706208] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.044 [2024-07-15 16:21:10.706280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.044 [2024-07-15 16:21:10.706296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.044 [2024-07-15 16:21:10.706304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.044 [2024-07-15 16:21:10.706311] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.044 [2024-07-15 16:21:10.706325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.044 qpair failed and we were unable to recover it. 00:29:35.044 [2024-07-15 16:21:10.716134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.045 [2024-07-15 16:21:10.716212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.045 [2024-07-15 16:21:10.716229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.045 [2024-07-15 16:21:10.716237] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.045 [2024-07-15 16:21:10.716244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.045 [2024-07-15 16:21:10.716260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.045 qpair failed and we were unable to recover it. 
00:29:35.045 [2024-07-15 16:21:10.726230] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.045 [2024-07-15 16:21:10.726352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.045 [2024-07-15 16:21:10.726369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.045 [2024-07-15 16:21:10.726376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.045 [2024-07-15 16:21:10.726383] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.045 [2024-07-15 16:21:10.726397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.045 qpair failed and we were unable to recover it. 00:29:35.045 [2024-07-15 16:21:10.736368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.045 [2024-07-15 16:21:10.736443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.045 [2024-07-15 16:21:10.736459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.045 [2024-07-15 16:21:10.736467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.045 [2024-07-15 16:21:10.736474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.045 [2024-07-15 16:21:10.736489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.045 qpair failed and we were unable to recover it. 00:29:35.045 [2024-07-15 16:21:10.746358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.045 [2024-07-15 16:21:10.746440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.045 [2024-07-15 16:21:10.746456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.045 [2024-07-15 16:21:10.746468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.045 [2024-07-15 16:21:10.746474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.045 [2024-07-15 16:21:10.746489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.045 qpair failed and we were unable to recover it. 
00:29:35.045 [2024-07-15 16:21:10.756412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.045 [2024-07-15 16:21:10.756485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.045 [2024-07-15 16:21:10.756502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.045 [2024-07-15 16:21:10.756509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.045 [2024-07-15 16:21:10.756516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.045 [2024-07-15 16:21:10.756530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.045 qpair failed and we were unable to recover it. 00:29:35.045 [2024-07-15 16:21:10.766405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.045 [2024-07-15 16:21:10.766485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.045 [2024-07-15 16:21:10.766500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.045 [2024-07-15 16:21:10.766508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.045 [2024-07-15 16:21:10.766515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.045 [2024-07-15 16:21:10.766529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.045 qpair failed and we were unable to recover it. 00:29:35.045 [2024-07-15 16:21:10.776401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.045 [2024-07-15 16:21:10.776478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.045 [2024-07-15 16:21:10.776494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.045 [2024-07-15 16:21:10.776501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.045 [2024-07-15 16:21:10.776508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.045 [2024-07-15 16:21:10.776522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.045 qpair failed and we were unable to recover it. 
00:29:35.045 [2024-07-15 16:21:10.786317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.045 [2024-07-15 16:21:10.786397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.045 [2024-07-15 16:21:10.786413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.045 [2024-07-15 16:21:10.786421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.045 [2024-07-15 16:21:10.786428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.045 [2024-07-15 16:21:10.786442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.045 qpair failed and we were unable to recover it. 00:29:35.045 [2024-07-15 16:21:10.796453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.045 [2024-07-15 16:21:10.796526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.045 [2024-07-15 16:21:10.796541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.045 [2024-07-15 16:21:10.796549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.045 [2024-07-15 16:21:10.796557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.045 [2024-07-15 16:21:10.796571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.045 qpair failed and we were unable to recover it. 00:29:35.045 [2024-07-15 16:21:10.806526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.045 [2024-07-15 16:21:10.806601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.045 [2024-07-15 16:21:10.806617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.045 [2024-07-15 16:21:10.806626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.045 [2024-07-15 16:21:10.806633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.045 [2024-07-15 16:21:10.806648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.045 qpair failed and we were unable to recover it. 
00:29:35.045 [2024-07-15 16:21:10.816581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.045 [2024-07-15 16:21:10.816660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.045 [2024-07-15 16:21:10.816677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.045 [2024-07-15 16:21:10.816684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.045 [2024-07-15 16:21:10.816691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.045 [2024-07-15 16:21:10.816706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.045 qpair failed and we were unable to recover it. 00:29:35.045 [2024-07-15 16:21:10.826492] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.045 [2024-07-15 16:21:10.826563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.045 [2024-07-15 16:21:10.826578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.045 [2024-07-15 16:21:10.826585] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.045 [2024-07-15 16:21:10.826592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.045 [2024-07-15 16:21:10.826606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.045 qpair failed and we were unable to recover it. 00:29:35.045 [2024-07-15 16:21:10.836554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.045 [2024-07-15 16:21:10.836631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.045 [2024-07-15 16:21:10.836651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.045 [2024-07-15 16:21:10.836659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.045 [2024-07-15 16:21:10.836666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.045 [2024-07-15 16:21:10.836681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.045 qpair failed and we were unable to recover it. 
00:29:35.045 [2024-07-15 16:21:10.846578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.045 [2024-07-15 16:21:10.846653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.045 [2024-07-15 16:21:10.846669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.045 [2024-07-15 16:21:10.846677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.045 [2024-07-15 16:21:10.846684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.045 [2024-07-15 16:21:10.846698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.045 qpair failed and we were unable to recover it. 00:29:35.045 [2024-07-15 16:21:10.856646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.045 [2024-07-15 16:21:10.856724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.045 [2024-07-15 16:21:10.856740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.045 [2024-07-15 16:21:10.856748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.046 [2024-07-15 16:21:10.856755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.046 [2024-07-15 16:21:10.856769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.046 qpair failed and we were unable to recover it. 00:29:35.046 [2024-07-15 16:21:10.866657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.046 [2024-07-15 16:21:10.866767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.046 [2024-07-15 16:21:10.866785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.046 [2024-07-15 16:21:10.866795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.046 [2024-07-15 16:21:10.866802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.046 [2024-07-15 16:21:10.866817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.046 qpair failed and we were unable to recover it. 
00:29:35.046 [2024-07-15 16:21:10.876668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.046 [2024-07-15 16:21:10.876740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.046 [2024-07-15 16:21:10.876756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.046 [2024-07-15 16:21:10.876764] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.046 [2024-07-15 16:21:10.876771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.046 [2024-07-15 16:21:10.876789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.046 qpair failed and we were unable to recover it. 00:29:35.308 [2024-07-15 16:21:10.886759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.308 [2024-07-15 16:21:10.886841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.308 [2024-07-15 16:21:10.886860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.308 [2024-07-15 16:21:10.886869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.308 [2024-07-15 16:21:10.886875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.308 [2024-07-15 16:21:10.886890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.308 qpair failed and we were unable to recover it. 00:29:35.308 [2024-07-15 16:21:10.896758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.308 [2024-07-15 16:21:10.896840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.308 [2024-07-15 16:21:10.896856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.308 [2024-07-15 16:21:10.896865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.308 [2024-07-15 16:21:10.896871] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.308 [2024-07-15 16:21:10.896886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.308 qpair failed and we were unable to recover it. 
00:29:35.308 [2024-07-15 16:21:10.906775] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.308 [2024-07-15 16:21:10.906849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.308 [2024-07-15 16:21:10.906865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.308 [2024-07-15 16:21:10.906872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.308 [2024-07-15 16:21:10.906879] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.308 [2024-07-15 16:21:10.906893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.308 qpair failed and we were unable to recover it. 00:29:35.308 [2024-07-15 16:21:10.916824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.308 [2024-07-15 16:21:10.916910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.308 [2024-07-15 16:21:10.916926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.308 [2024-07-15 16:21:10.916933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.308 [2024-07-15 16:21:10.916940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.308 [2024-07-15 16:21:10.916954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.308 qpair failed and we were unable to recover it. 00:29:35.308 [2024-07-15 16:21:10.926800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.308 [2024-07-15 16:21:10.926875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.308 [2024-07-15 16:21:10.926895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.308 [2024-07-15 16:21:10.926902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.308 [2024-07-15 16:21:10.926909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.308 [2024-07-15 16:21:10.926923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.308 qpair failed and we were unable to recover it. 
00:29:35.308 [2024-07-15 16:21:10.936735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.308 [2024-07-15 16:21:10.936813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.308 [2024-07-15 16:21:10.936830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.309 [2024-07-15 16:21:10.936838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.309 [2024-07-15 16:21:10.936845] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.309 [2024-07-15 16:21:10.936860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.309 qpair failed and we were unable to recover it. 00:29:35.309 [2024-07-15 16:21:10.946891] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.309 [2024-07-15 16:21:10.946970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.309 [2024-07-15 16:21:10.946987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.309 [2024-07-15 16:21:10.946996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.309 [2024-07-15 16:21:10.947002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.309 [2024-07-15 16:21:10.947017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.309 qpair failed and we were unable to recover it. 00:29:35.309 [2024-07-15 16:21:10.956888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.309 [2024-07-15 16:21:10.956960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.309 [2024-07-15 16:21:10.956976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.309 [2024-07-15 16:21:10.956984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.309 [2024-07-15 16:21:10.956992] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.309 [2024-07-15 16:21:10.957007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.309 qpair failed and we were unable to recover it. 
00:29:35.309 [2024-07-15 16:21:10.966948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.309 [2024-07-15 16:21:10.967025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.309 [2024-07-15 16:21:10.967044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.309 [2024-07-15 16:21:10.967052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.309 [2024-07-15 16:21:10.967059] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.309 [2024-07-15 16:21:10.967078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.309 qpair failed and we were unable to recover it. 00:29:35.309 [2024-07-15 16:21:10.976959] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.309 [2024-07-15 16:21:10.977036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.309 [2024-07-15 16:21:10.977053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.309 [2024-07-15 16:21:10.977060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.309 [2024-07-15 16:21:10.977068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.309 [2024-07-15 16:21:10.977082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.309 qpair failed and we were unable to recover it. 00:29:35.309 [2024-07-15 16:21:10.986962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.309 [2024-07-15 16:21:10.987033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.309 [2024-07-15 16:21:10.987050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.309 [2024-07-15 16:21:10.987057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.309 [2024-07-15 16:21:10.987064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.309 [2024-07-15 16:21:10.987079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.309 qpair failed and we were unable to recover it. 
00:29:35.309 [2024-07-15 16:21:10.997048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.309 [2024-07-15 16:21:10.997166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.309 [2024-07-15 16:21:10.997183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.309 [2024-07-15 16:21:10.997191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.309 [2024-07-15 16:21:10.997197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.309 [2024-07-15 16:21:10.997212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.309 qpair failed and we were unable to recover it. 00:29:35.309 [2024-07-15 16:21:11.007108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.309 [2024-07-15 16:21:11.007225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.309 [2024-07-15 16:21:11.007242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.309 [2024-07-15 16:21:11.007250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.309 [2024-07-15 16:21:11.007256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.309 [2024-07-15 16:21:11.007271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.309 qpair failed and we were unable to recover it. 00:29:35.309 [2024-07-15 16:21:11.017044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.309 [2024-07-15 16:21:11.017121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.309 [2024-07-15 16:21:11.017147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.309 [2024-07-15 16:21:11.017155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.309 [2024-07-15 16:21:11.017162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.309 [2024-07-15 16:21:11.017177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.309 qpair failed and we were unable to recover it. 
00:29:35.309 [2024-07-15 16:21:11.027072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.309 [2024-07-15 16:21:11.027151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.309 [2024-07-15 16:21:11.027168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.309 [2024-07-15 16:21:11.027176] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.309 [2024-07-15 16:21:11.027183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.309 [2024-07-15 16:21:11.027197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.309 qpair failed and we were unable to recover it. 00:29:35.309 [2024-07-15 16:21:11.037117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.309 [2024-07-15 16:21:11.037190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.309 [2024-07-15 16:21:11.037206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.309 [2024-07-15 16:21:11.037214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.309 [2024-07-15 16:21:11.037220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.309 [2024-07-15 16:21:11.037235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.309 qpair failed and we were unable to recover it. 00:29:35.309 [2024-07-15 16:21:11.047144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.309 [2024-07-15 16:21:11.047219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.309 [2024-07-15 16:21:11.047235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.309 [2024-07-15 16:21:11.047242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.309 [2024-07-15 16:21:11.047249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.309 [2024-07-15 16:21:11.047264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.309 qpair failed and we were unable to recover it. 
00:29:35.309 [2024-07-15 16:21:11.057147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.309 [2024-07-15 16:21:11.057228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.309 [2024-07-15 16:21:11.057244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.309 [2024-07-15 16:21:11.057251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.309 [2024-07-15 16:21:11.057258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.309 [2024-07-15 16:21:11.057277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.309 qpair failed and we were unable to recover it. 00:29:35.309 [2024-07-15 16:21:11.067194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.309 [2024-07-15 16:21:11.067259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.309 [2024-07-15 16:21:11.067278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.309 [2024-07-15 16:21:11.067285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.309 [2024-07-15 16:21:11.067292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.309 [2024-07-15 16:21:11.067307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.309 qpair failed and we were unable to recover it. 00:29:35.309 [2024-07-15 16:21:11.077133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.309 [2024-07-15 16:21:11.077224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.309 [2024-07-15 16:21:11.077241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.309 [2024-07-15 16:21:11.077249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.309 [2024-07-15 16:21:11.077255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.309 [2024-07-15 16:21:11.077270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.310 qpair failed and we were unable to recover it. 
00:29:35.310 [2024-07-15 16:21:11.087264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.310 [2024-07-15 16:21:11.087340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.310 [2024-07-15 16:21:11.087356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.310 [2024-07-15 16:21:11.087363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.310 [2024-07-15 16:21:11.087370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.310 [2024-07-15 16:21:11.087384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.310 qpair failed and we were unable to recover it. 00:29:35.310 [2024-07-15 16:21:11.097256] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.310 [2024-07-15 16:21:11.097343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.310 [2024-07-15 16:21:11.097360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.310 [2024-07-15 16:21:11.097368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.310 [2024-07-15 16:21:11.097375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.310 [2024-07-15 16:21:11.097389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.310 qpair failed and we were unable to recover it. 00:29:35.310 [2024-07-15 16:21:11.107258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.310 [2024-07-15 16:21:11.107337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.310 [2024-07-15 16:21:11.107357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.310 [2024-07-15 16:21:11.107365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.310 [2024-07-15 16:21:11.107371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.310 [2024-07-15 16:21:11.107386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.310 qpair failed and we were unable to recover it. 
00:29:35.310 [2024-07-15 16:21:11.117359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.310 [2024-07-15 16:21:11.117428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.310 [2024-07-15 16:21:11.117444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.310 [2024-07-15 16:21:11.117451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.310 [2024-07-15 16:21:11.117458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.310 [2024-07-15 16:21:11.117473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.310 qpair failed and we were unable to recover it. 00:29:35.310 [2024-07-15 16:21:11.127344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.310 [2024-07-15 16:21:11.127420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.310 [2024-07-15 16:21:11.127436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.310 [2024-07-15 16:21:11.127443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.310 [2024-07-15 16:21:11.127449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.310 [2024-07-15 16:21:11.127463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.310 qpair failed and we were unable to recover it. 00:29:35.310 [2024-07-15 16:21:11.137419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.310 [2024-07-15 16:21:11.137500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.310 [2024-07-15 16:21:11.137517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.310 [2024-07-15 16:21:11.137525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.310 [2024-07-15 16:21:11.137532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.310 [2024-07-15 16:21:11.137546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.310 qpair failed and we were unable to recover it. 
00:29:35.310 [2024-07-15 16:21:11.147404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.310 [2024-07-15 16:21:11.147479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.310 [2024-07-15 16:21:11.147495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.310 [2024-07-15 16:21:11.147503] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.310 [2024-07-15 16:21:11.147513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.310 [2024-07-15 16:21:11.147527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.310 qpair failed and we were unable to recover it. 00:29:35.573 [2024-07-15 16:21:11.157450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.573 [2024-07-15 16:21:11.157522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.573 [2024-07-15 16:21:11.157538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.573 [2024-07-15 16:21:11.157545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.573 [2024-07-15 16:21:11.157551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.573 [2024-07-15 16:21:11.157566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.573 qpair failed and we were unable to recover it. 00:29:35.573 [2024-07-15 16:21:11.167491] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.573 [2024-07-15 16:21:11.167583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.573 [2024-07-15 16:21:11.167600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.573 [2024-07-15 16:21:11.167607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.573 [2024-07-15 16:21:11.167614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.573 [2024-07-15 16:21:11.167628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.573 qpair failed and we were unable to recover it. 
00:29:35.573 [2024-07-15 16:21:11.177504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.573 [2024-07-15 16:21:11.177582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.573 [2024-07-15 16:21:11.177598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.573 [2024-07-15 16:21:11.177606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.573 [2024-07-15 16:21:11.177614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.573 [2024-07-15 16:21:11.177629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.573 qpair failed and we were unable to recover it. 00:29:35.573 [2024-07-15 16:21:11.187522] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.573 [2024-07-15 16:21:11.187596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.573 [2024-07-15 16:21:11.187612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.573 [2024-07-15 16:21:11.187619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.573 [2024-07-15 16:21:11.187625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.573 [2024-07-15 16:21:11.187640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.573 qpair failed and we were unable to recover it. 00:29:35.573 [2024-07-15 16:21:11.197564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.573 [2024-07-15 16:21:11.197644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.573 [2024-07-15 16:21:11.197661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.573 [2024-07-15 16:21:11.197668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.573 [2024-07-15 16:21:11.197675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.573 [2024-07-15 16:21:11.197689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.573 qpair failed and we were unable to recover it. 
00:29:35.573 [2024-07-15 16:21:11.207673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.573 [2024-07-15 16:21:11.207750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.573 [2024-07-15 16:21:11.207768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.573 [2024-07-15 16:21:11.207776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.573 [2024-07-15 16:21:11.207783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.573 [2024-07-15 16:21:11.207797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.573 qpair failed and we were unable to recover it. 00:29:35.573 [2024-07-15 16:21:11.217597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.573 [2024-07-15 16:21:11.217671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.573 [2024-07-15 16:21:11.217688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.573 [2024-07-15 16:21:11.217696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.573 [2024-07-15 16:21:11.217704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.573 [2024-07-15 16:21:11.217718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.573 qpair failed and we were unable to recover it. 00:29:35.574 [2024-07-15 16:21:11.227515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.574 [2024-07-15 16:21:11.227591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.574 [2024-07-15 16:21:11.227608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.574 [2024-07-15 16:21:11.227615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.574 [2024-07-15 16:21:11.227622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.574 [2024-07-15 16:21:11.227637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.574 qpair failed and we were unable to recover it. 
00:29:35.574 [2024-07-15 16:21:11.237664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.574 [2024-07-15 16:21:11.237738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.574 [2024-07-15 16:21:11.237754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.574 [2024-07-15 16:21:11.237761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.574 [2024-07-15 16:21:11.237772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.574 [2024-07-15 16:21:11.237787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.574 qpair failed and we were unable to recover it. 00:29:35.574 [2024-07-15 16:21:11.247680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.574 [2024-07-15 16:21:11.247751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.574 [2024-07-15 16:21:11.247768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.574 [2024-07-15 16:21:11.247775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.574 [2024-07-15 16:21:11.247782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.574 [2024-07-15 16:21:11.247797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.574 qpair failed and we were unable to recover it. 00:29:35.574 [2024-07-15 16:21:11.257707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.574 [2024-07-15 16:21:11.257846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.574 [2024-07-15 16:21:11.257871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.574 [2024-07-15 16:21:11.257880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.574 [2024-07-15 16:21:11.257887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.574 [2024-07-15 16:21:11.257907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.574 qpair failed and we were unable to recover it. 
00:29:35.574 [2024-07-15 16:21:11.267740] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.574 [2024-07-15 16:21:11.267824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.574 [2024-07-15 16:21:11.267851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.574 [2024-07-15 16:21:11.267860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.574 [2024-07-15 16:21:11.267867] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.574 [2024-07-15 16:21:11.267886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.574 qpair failed and we were unable to recover it. 00:29:35.574 [2024-07-15 16:21:11.277752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.574 [2024-07-15 16:21:11.277836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.574 [2024-07-15 16:21:11.277861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.574 [2024-07-15 16:21:11.277871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.574 [2024-07-15 16:21:11.277878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.574 [2024-07-15 16:21:11.277897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.574 qpair failed and we were unable to recover it. 00:29:35.574 [2024-07-15 16:21:11.287825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.574 [2024-07-15 16:21:11.287913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.574 [2024-07-15 16:21:11.287939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.574 [2024-07-15 16:21:11.287948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.574 [2024-07-15 16:21:11.287955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.574 [2024-07-15 16:21:11.287974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.574 qpair failed and we were unable to recover it. 
00:29:35.574 [2024-07-15 16:21:11.297816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.574 [2024-07-15 16:21:11.297894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.574 [2024-07-15 16:21:11.297912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.574 [2024-07-15 16:21:11.297921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.574 [2024-07-15 16:21:11.297928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.574 [2024-07-15 16:21:11.297943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.574 qpair failed and we were unable to recover it. 00:29:35.574 [2024-07-15 16:21:11.307849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.574 [2024-07-15 16:21:11.307921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.574 [2024-07-15 16:21:11.307947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.574 [2024-07-15 16:21:11.307956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.574 [2024-07-15 16:21:11.307963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.574 [2024-07-15 16:21:11.307983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.574 qpair failed and we were unable to recover it. 00:29:35.574 [2024-07-15 16:21:11.317913] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.574 [2024-07-15 16:21:11.317992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.574 [2024-07-15 16:21:11.318010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.574 [2024-07-15 16:21:11.318018] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.574 [2024-07-15 16:21:11.318025] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.574 [2024-07-15 16:21:11.318040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.574 qpair failed and we were unable to recover it. 
00:29:35.574 [2024-07-15 16:21:11.327953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.574 [2024-07-15 16:21:11.328070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.574 [2024-07-15 16:21:11.328087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.574 [2024-07-15 16:21:11.328099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.574 [2024-07-15 16:21:11.328106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.574 [2024-07-15 16:21:11.328121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.574 qpair failed and we were unable to recover it. 00:29:35.574 [2024-07-15 16:21:11.337928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.574 [2024-07-15 16:21:11.338012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.574 [2024-07-15 16:21:11.338029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.574 [2024-07-15 16:21:11.338037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.574 [2024-07-15 16:21:11.338043] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.574 [2024-07-15 16:21:11.338057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.574 qpair failed and we were unable to recover it. 00:29:35.574 [2024-07-15 16:21:11.347939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.574 [2024-07-15 16:21:11.348002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.574 [2024-07-15 16:21:11.348020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.574 [2024-07-15 16:21:11.348028] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.574 [2024-07-15 16:21:11.348034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.574 [2024-07-15 16:21:11.348049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.574 qpair failed and we were unable to recover it. 
00:29:35.574 [2024-07-15 16:21:11.358007] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.574 [2024-07-15 16:21:11.358085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.574 [2024-07-15 16:21:11.358102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.574 [2024-07-15 16:21:11.358110] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.574 [2024-07-15 16:21:11.358117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.574 [2024-07-15 16:21:11.358136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.574 qpair failed and we were unable to recover it. 00:29:35.574 [2024-07-15 16:21:11.368025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.574 [2024-07-15 16:21:11.368099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.575 [2024-07-15 16:21:11.368115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.575 [2024-07-15 16:21:11.368126] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.575 [2024-07-15 16:21:11.368133] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.575 [2024-07-15 16:21:11.368148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.575 qpair failed and we were unable to recover it. 00:29:35.575 [2024-07-15 16:21:11.378015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.575 [2024-07-15 16:21:11.378096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.575 [2024-07-15 16:21:11.378112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.575 [2024-07-15 16:21:11.378120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.575 [2024-07-15 16:21:11.378130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.575 [2024-07-15 16:21:11.378145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.575 qpair failed and we were unable to recover it. 
00:29:35.575 [2024-07-15 16:21:11.387936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.575 [2024-07-15 16:21:11.388016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.575 [2024-07-15 16:21:11.388032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.575 [2024-07-15 16:21:11.388040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.575 [2024-07-15 16:21:11.388046] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.575 [2024-07-15 16:21:11.388061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.575 qpair failed and we were unable to recover it. 00:29:35.575 [2024-07-15 16:21:11.398088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.575 [2024-07-15 16:21:11.398169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.575 [2024-07-15 16:21:11.398186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.575 [2024-07-15 16:21:11.398193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.575 [2024-07-15 16:21:11.398199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.575 [2024-07-15 16:21:11.398214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.575 qpair failed and we were unable to recover it. 00:29:35.575 [2024-07-15 16:21:11.408137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.575 [2024-07-15 16:21:11.408215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.575 [2024-07-15 16:21:11.408231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.575 [2024-07-15 16:21:11.408238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.575 [2024-07-15 16:21:11.408244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.575 [2024-07-15 16:21:11.408260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.575 qpair failed and we were unable to recover it. 
00:29:35.836 [2024-07-15 16:21:11.418141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.836 [2024-07-15 16:21:11.418222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.836 [2024-07-15 16:21:11.418238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.836 [2024-07-15 16:21:11.418250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.836 [2024-07-15 16:21:11.418256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.836 [2024-07-15 16:21:11.418271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.836 qpair failed and we were unable to recover it. 00:29:35.836 [2024-07-15 16:21:11.428055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.836 [2024-07-15 16:21:11.428147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.836 [2024-07-15 16:21:11.428164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.836 [2024-07-15 16:21:11.428171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.836 [2024-07-15 16:21:11.428178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.836 [2024-07-15 16:21:11.428193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.836 qpair failed and we were unable to recover it. 00:29:35.836 [2024-07-15 16:21:11.438208] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.836 [2024-07-15 16:21:11.438284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.836 [2024-07-15 16:21:11.438301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.836 [2024-07-15 16:21:11.438308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.836 [2024-07-15 16:21:11.438315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.836 [2024-07-15 16:21:11.438330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.836 qpair failed and we were unable to recover it. 
00:29:35.836 [2024-07-15 16:21:11.448216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.836 [2024-07-15 16:21:11.448290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.836 [2024-07-15 16:21:11.448306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.836 [2024-07-15 16:21:11.448313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.836 [2024-07-15 16:21:11.448320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.836 [2024-07-15 16:21:11.448334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.836 qpair failed and we were unable to recover it. 00:29:35.836 [2024-07-15 16:21:11.458286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.836 [2024-07-15 16:21:11.458367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.836 [2024-07-15 16:21:11.458383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.836 [2024-07-15 16:21:11.458390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.836 [2024-07-15 16:21:11.458397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.836 [2024-07-15 16:21:11.458411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.836 qpair failed and we were unable to recover it. 00:29:35.836 [2024-07-15 16:21:11.468297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.836 [2024-07-15 16:21:11.468461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.836 [2024-07-15 16:21:11.468477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.836 [2024-07-15 16:21:11.468484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.836 [2024-07-15 16:21:11.468491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.836 [2024-07-15 16:21:11.468505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.836 qpair failed and we were unable to recover it. 
00:29:35.836 [2024-07-15 16:21:11.478308] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.836 [2024-07-15 16:21:11.478386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.836 [2024-07-15 16:21:11.478402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.836 [2024-07-15 16:21:11.478411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.836 [2024-07-15 16:21:11.478418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.836 [2024-07-15 16:21:11.478432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.836 qpair failed and we were unable to recover it. 00:29:35.836 [2024-07-15 16:21:11.488347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.836 [2024-07-15 16:21:11.488422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.836 [2024-07-15 16:21:11.488437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.836 [2024-07-15 16:21:11.488445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.836 [2024-07-15 16:21:11.488452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.836 [2024-07-15 16:21:11.488466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.836 qpair failed and we were unable to recover it. 00:29:35.836 [2024-07-15 16:21:11.498283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.836 [2024-07-15 16:21:11.498372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.836 [2024-07-15 16:21:11.498389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.836 [2024-07-15 16:21:11.498397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.836 [2024-07-15 16:21:11.498403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.836 [2024-07-15 16:21:11.498417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.836 qpair failed and we were unable to recover it. 
00:29:35.836 [2024-07-15 16:21:11.508424] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.836 [2024-07-15 16:21:11.508511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.836 [2024-07-15 16:21:11.508528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.836 [2024-07-15 16:21:11.508540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.836 [2024-07-15 16:21:11.508546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.836 [2024-07-15 16:21:11.508560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.836 qpair failed and we were unable to recover it. 00:29:35.836 [2024-07-15 16:21:11.518433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.837 [2024-07-15 16:21:11.518507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.837 [2024-07-15 16:21:11.518523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.837 [2024-07-15 16:21:11.518531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.837 [2024-07-15 16:21:11.518538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.837 [2024-07-15 16:21:11.518552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.837 qpair failed and we were unable to recover it. 00:29:35.837 [2024-07-15 16:21:11.528445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.837 [2024-07-15 16:21:11.528522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.837 [2024-07-15 16:21:11.528539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.837 [2024-07-15 16:21:11.528546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.837 [2024-07-15 16:21:11.528553] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.837 [2024-07-15 16:21:11.528568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.837 qpair failed and we were unable to recover it. 
00:29:35.837 [2024-07-15 16:21:11.538364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.837 [2024-07-15 16:21:11.538486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.837 [2024-07-15 16:21:11.538502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.837 [2024-07-15 16:21:11.538509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.837 [2024-07-15 16:21:11.538516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.837 [2024-07-15 16:21:11.538530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.837 qpair failed and we were unable to recover it. 00:29:35.837 [2024-07-15 16:21:11.548515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.837 [2024-07-15 16:21:11.548586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.837 [2024-07-15 16:21:11.548602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.837 [2024-07-15 16:21:11.548609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.837 [2024-07-15 16:21:11.548616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.837 [2024-07-15 16:21:11.548631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.837 qpair failed and we were unable to recover it. 00:29:35.837 [2024-07-15 16:21:11.558422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.837 [2024-07-15 16:21:11.558503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.837 [2024-07-15 16:21:11.558519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.837 [2024-07-15 16:21:11.558527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.837 [2024-07-15 16:21:11.558533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.837 [2024-07-15 16:21:11.558547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.837 qpair failed and we were unable to recover it. 
00:29:35.837 [2024-07-15 16:21:11.568574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.837 [2024-07-15 16:21:11.568651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.837 [2024-07-15 16:21:11.568667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.837 [2024-07-15 16:21:11.568674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.837 [2024-07-15 16:21:11.568680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.837 [2024-07-15 16:21:11.568695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.837 qpair failed and we were unable to recover it. 00:29:35.837 [2024-07-15 16:21:11.578647] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.837 [2024-07-15 16:21:11.578760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.837 [2024-07-15 16:21:11.578776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.837 [2024-07-15 16:21:11.578784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.837 [2024-07-15 16:21:11.578790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.837 [2024-07-15 16:21:11.578804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.837 qpair failed and we were unable to recover it. 00:29:35.837 [2024-07-15 16:21:11.588602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.837 [2024-07-15 16:21:11.588685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.837 [2024-07-15 16:21:11.588710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.837 [2024-07-15 16:21:11.588719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.837 [2024-07-15 16:21:11.588726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.837 [2024-07-15 16:21:11.588746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.837 qpair failed and we were unable to recover it. 
00:29:35.837 [2024-07-15 16:21:11.598523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.837 [2024-07-15 16:21:11.598598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.837 [2024-07-15 16:21:11.598621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.837 [2024-07-15 16:21:11.598629] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.837 [2024-07-15 16:21:11.598636] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.837 [2024-07-15 16:21:11.598652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.837 qpair failed and we were unable to recover it. 00:29:35.837 [2024-07-15 16:21:11.608665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.837 [2024-07-15 16:21:11.608740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.837 [2024-07-15 16:21:11.608757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.837 [2024-07-15 16:21:11.608764] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.837 [2024-07-15 16:21:11.608771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.837 [2024-07-15 16:21:11.608786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.837 qpair failed and we were unable to recover it. 00:29:35.837 [2024-07-15 16:21:11.618688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.837 [2024-07-15 16:21:11.618776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.837 [2024-07-15 16:21:11.618802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.837 [2024-07-15 16:21:11.618811] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.837 [2024-07-15 16:21:11.618818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.837 [2024-07-15 16:21:11.618837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.837 qpair failed and we were unable to recover it. 
00:29:35.837 [2024-07-15 16:21:11.628741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.837 [2024-07-15 16:21:11.628835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.837 [2024-07-15 16:21:11.628861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.837 [2024-07-15 16:21:11.628870] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.837 [2024-07-15 16:21:11.628876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.837 [2024-07-15 16:21:11.628895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.837 qpair failed and we were unable to recover it. 00:29:35.837 [2024-07-15 16:21:11.638673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.837 [2024-07-15 16:21:11.638779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.837 [2024-07-15 16:21:11.638804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.837 [2024-07-15 16:21:11.638813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.837 [2024-07-15 16:21:11.638820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.837 [2024-07-15 16:21:11.638845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.837 qpair failed and we were unable to recover it. 00:29:35.837 [2024-07-15 16:21:11.648773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.837 [2024-07-15 16:21:11.648852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.837 [2024-07-15 16:21:11.648877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.837 [2024-07-15 16:21:11.648886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.837 [2024-07-15 16:21:11.648893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.837 [2024-07-15 16:21:11.648912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.837 qpair failed and we were unable to recover it. 
00:29:35.837 [2024-07-15 16:21:11.658807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.837 [2024-07-15 16:21:11.658889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.837 [2024-07-15 16:21:11.658915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.838 [2024-07-15 16:21:11.658924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.838 [2024-07-15 16:21:11.658932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.838 [2024-07-15 16:21:11.658951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 16:21:11.668821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.838 [2024-07-15 16:21:11.668914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.838 [2024-07-15 16:21:11.668939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.838 [2024-07-15 16:21:11.668949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.838 [2024-07-15 16:21:11.668955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:35.838 [2024-07-15 16:21:11.668974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.838 qpair failed and we were unable to recover it. 00:29:36.134 [2024-07-15 16:21:11.678887] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.134 [2024-07-15 16:21:11.678968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.134 [2024-07-15 16:21:11.678994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.134 [2024-07-15 16:21:11.679004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.134 [2024-07-15 16:21:11.679011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.134 [2024-07-15 16:21:11.679030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.134 qpair failed and we were unable to recover it. 
00:29:36.134 [2024-07-15 16:21:11.688882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.134 [2024-07-15 16:21:11.688961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.134 [2024-07-15 16:21:11.688984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.134 [2024-07-15 16:21:11.688992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.134 [2024-07-15 16:21:11.688998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.134 [2024-07-15 16:21:11.689014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.134 qpair failed and we were unable to recover it. 00:29:36.134 [2024-07-15 16:21:11.698932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.134 [2024-07-15 16:21:11.699011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.134 [2024-07-15 16:21:11.699029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.134 [2024-07-15 16:21:11.699037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.134 [2024-07-15 16:21:11.699044] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.134 [2024-07-15 16:21:11.699059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.134 qpair failed and we were unable to recover it. 00:29:36.134 [2024-07-15 16:21:11.708963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.134 [2024-07-15 16:21:11.709036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.134 [2024-07-15 16:21:11.709052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.134 [2024-07-15 16:21:11.709059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.134 [2024-07-15 16:21:11.709066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.134 [2024-07-15 16:21:11.709080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.134 qpair failed and we were unable to recover it. 
00:29:36.134 [2024-07-15 16:21:11.719011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.134 [2024-07-15 16:21:11.719086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.134 [2024-07-15 16:21:11.719105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.134 [2024-07-15 16:21:11.719112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.134 [2024-07-15 16:21:11.719119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.134 [2024-07-15 16:21:11.719141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.134 qpair failed and we were unable to recover it. 00:29:36.134 [2024-07-15 16:21:11.728989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.134 [2024-07-15 16:21:11.729060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.134 [2024-07-15 16:21:11.729077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.134 [2024-07-15 16:21:11.729084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.134 [2024-07-15 16:21:11.729091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.134 [2024-07-15 16:21:11.729109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.134 qpair failed and we were unable to recover it. 00:29:36.134 [2024-07-15 16:21:11.739037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.134 [2024-07-15 16:21:11.739116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.134 [2024-07-15 16:21:11.739136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.134 [2024-07-15 16:21:11.739144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.134 [2024-07-15 16:21:11.739150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.134 [2024-07-15 16:21:11.739166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.134 qpair failed and we were unable to recover it. 
00:29:36.134 [2024-07-15 16:21:11.749038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.134 [2024-07-15 16:21:11.749110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.134 [2024-07-15 16:21:11.749129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.134 [2024-07-15 16:21:11.749136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.134 [2024-07-15 16:21:11.749143] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.134 [2024-07-15 16:21:11.749157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.134 qpair failed and we were unable to recover it. 00:29:36.134 [2024-07-15 16:21:11.759067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.134 [2024-07-15 16:21:11.759142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.134 [2024-07-15 16:21:11.759158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.134 [2024-07-15 16:21:11.759165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.135 [2024-07-15 16:21:11.759172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.135 [2024-07-15 16:21:11.759187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.135 qpair failed and we were unable to recover it. 00:29:36.135 [2024-07-15 16:21:11.769078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.135 [2024-07-15 16:21:11.769154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.135 [2024-07-15 16:21:11.769170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.135 [2024-07-15 16:21:11.769177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.135 [2024-07-15 16:21:11.769184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.135 [2024-07-15 16:21:11.769199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.135 qpair failed and we were unable to recover it. 
00:29:36.135 [2024-07-15 16:21:11.779157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.135 [2024-07-15 16:21:11.779271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.135 [2024-07-15 16:21:11.779295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.135 [2024-07-15 16:21:11.779303] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.135 [2024-07-15 16:21:11.779309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.135 [2024-07-15 16:21:11.779324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.135 qpair failed and we were unable to recover it. 00:29:36.135 [2024-07-15 16:21:11.789267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.135 [2024-07-15 16:21:11.789353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.135 [2024-07-15 16:21:11.789370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.135 [2024-07-15 16:21:11.789377] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.135 [2024-07-15 16:21:11.789384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.135 [2024-07-15 16:21:11.789398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.135 qpair failed and we were unable to recover it. 00:29:36.135 [2024-07-15 16:21:11.799185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.135 [2024-07-15 16:21:11.799261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.135 [2024-07-15 16:21:11.799277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.135 [2024-07-15 16:21:11.799285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.135 [2024-07-15 16:21:11.799292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.135 [2024-07-15 16:21:11.799306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.135 qpair failed and we were unable to recover it. 
00:29:36.135 [2024-07-15 16:21:11.809233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.135 [2024-07-15 16:21:11.809308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.135 [2024-07-15 16:21:11.809323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.135 [2024-07-15 16:21:11.809331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.135 [2024-07-15 16:21:11.809338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.135 [2024-07-15 16:21:11.809353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.135 qpair failed and we were unable to recover it. 00:29:36.135 [2024-07-15 16:21:11.819241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.135 [2024-07-15 16:21:11.819366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.135 [2024-07-15 16:21:11.819383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.135 [2024-07-15 16:21:11.819391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.135 [2024-07-15 16:21:11.819397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.135 [2024-07-15 16:21:11.819415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.135 qpair failed and we were unable to recover it. 00:29:36.135 [2024-07-15 16:21:11.829270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.135 [2024-07-15 16:21:11.829346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.135 [2024-07-15 16:21:11.829361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.135 [2024-07-15 16:21:11.829369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.135 [2024-07-15 16:21:11.829375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.135 [2024-07-15 16:21:11.829390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.135 qpair failed and we were unable to recover it. 
00:29:36.135 [2024-07-15 16:21:11.839328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.135 [2024-07-15 16:21:11.839405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.135 [2024-07-15 16:21:11.839421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.135 [2024-07-15 16:21:11.839429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.135 [2024-07-15 16:21:11.839436] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.135 [2024-07-15 16:21:11.839451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.135 qpair failed and we were unable to recover it. 00:29:36.135 [2024-07-15 16:21:11.849308] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.135 [2024-07-15 16:21:11.849383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.135 [2024-07-15 16:21:11.849399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.135 [2024-07-15 16:21:11.849406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.135 [2024-07-15 16:21:11.849414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.135 [2024-07-15 16:21:11.849429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.135 qpair failed and we were unable to recover it. 00:29:36.135 [2024-07-15 16:21:11.859353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.135 [2024-07-15 16:21:11.859432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.135 [2024-07-15 16:21:11.859448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.135 [2024-07-15 16:21:11.859456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.135 [2024-07-15 16:21:11.859463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.135 [2024-07-15 16:21:11.859476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.135 qpair failed and we were unable to recover it. 
00:29:36.135 [2024-07-15 16:21:11.869395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.135 [2024-07-15 16:21:11.869470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.135 [2024-07-15 16:21:11.869489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.135 [2024-07-15 16:21:11.869497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.135 [2024-07-15 16:21:11.869503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.135 [2024-07-15 16:21:11.869517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.135 qpair failed and we were unable to recover it. 00:29:36.135 [2024-07-15 16:21:11.879288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.135 [2024-07-15 16:21:11.879356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.135 [2024-07-15 16:21:11.879370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.135 [2024-07-15 16:21:11.879378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.135 [2024-07-15 16:21:11.879384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.135 [2024-07-15 16:21:11.879398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.135 qpair failed and we were unable to recover it. 00:29:36.135 [2024-07-15 16:21:11.889452] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.135 [2024-07-15 16:21:11.889526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.135 [2024-07-15 16:21:11.889541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.135 [2024-07-15 16:21:11.889549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.135 [2024-07-15 16:21:11.889556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.135 [2024-07-15 16:21:11.889570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.135 qpair failed and we were unable to recover it. 
00:29:36.135 [2024-07-15 16:21:11.899457] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.135 [2024-07-15 16:21:11.899536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.135 [2024-07-15 16:21:11.899552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.135 [2024-07-15 16:21:11.899560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.135 [2024-07-15 16:21:11.899567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.135 [2024-07-15 16:21:11.899582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.135 qpair failed and we were unable to recover it. 00:29:36.136 [2024-07-15 16:21:11.909480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.136 [2024-07-15 16:21:11.909552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.136 [2024-07-15 16:21:11.909568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.136 [2024-07-15 16:21:11.909576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.136 [2024-07-15 16:21:11.909586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.136 [2024-07-15 16:21:11.909600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.136 qpair failed and we were unable to recover it. 00:29:36.136 [2024-07-15 16:21:11.919519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.136 [2024-07-15 16:21:11.919597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.136 [2024-07-15 16:21:11.919613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.136 [2024-07-15 16:21:11.919622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.136 [2024-07-15 16:21:11.919628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.136 [2024-07-15 16:21:11.919642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.136 qpair failed and we were unable to recover it. 
00:29:36.136 [2024-07-15 16:21:11.929647] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.136 [2024-07-15 16:21:11.929736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.136 [2024-07-15 16:21:11.929753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.136 [2024-07-15 16:21:11.929760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.136 [2024-07-15 16:21:11.929767] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.136 [2024-07-15 16:21:11.929781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.136 qpair failed and we were unable to recover it. 00:29:36.136 [2024-07-15 16:21:11.939538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.136 [2024-07-15 16:21:11.939635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.136 [2024-07-15 16:21:11.939650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.136 [2024-07-15 16:21:11.939658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.136 [2024-07-15 16:21:11.939665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.136 [2024-07-15 16:21:11.939679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.136 qpair failed and we were unable to recover it. 00:29:36.398 [2024-07-15 16:21:11.949584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.398 [2024-07-15 16:21:11.949654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.398 [2024-07-15 16:21:11.949670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.398 [2024-07-15 16:21:11.949677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.398 [2024-07-15 16:21:11.949684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.398 [2024-07-15 16:21:11.949699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.398 qpair failed and we were unable to recover it. 
00:29:36.398 [2024-07-15 16:21:11.959657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.398 [2024-07-15 16:21:11.959736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.398 [2024-07-15 16:21:11.959752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.398 [2024-07-15 16:21:11.959759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.398 [2024-07-15 16:21:11.959766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.398 [2024-07-15 16:21:11.959780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.398 qpair failed and we were unable to recover it. 00:29:36.398 [2024-07-15 16:21:11.969687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.398 [2024-07-15 16:21:11.969770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.398 [2024-07-15 16:21:11.969796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.398 [2024-07-15 16:21:11.969806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.398 [2024-07-15 16:21:11.969813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.398 [2024-07-15 16:21:11.969832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.398 qpair failed and we were unable to recover it. 00:29:36.398 [2024-07-15 16:21:11.979712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.398 [2024-07-15 16:21:11.979824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.398 [2024-07-15 16:21:11.979850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.399 [2024-07-15 16:21:11.979859] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.399 [2024-07-15 16:21:11.979866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.399 [2024-07-15 16:21:11.979885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.399 qpair failed and we were unable to recover it. 
00:29:36.399 [2024-07-15 16:21:11.989685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.399 [2024-07-15 16:21:11.989768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.399 [2024-07-15 16:21:11.989794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.399 [2024-07-15 16:21:11.989803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.399 [2024-07-15 16:21:11.989810] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.399 [2024-07-15 16:21:11.989829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.399 qpair failed and we were unable to recover it. 00:29:36.399 [2024-07-15 16:21:11.999708] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.399 [2024-07-15 16:21:11.999787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.399 [2024-07-15 16:21:11.999812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.399 [2024-07-15 16:21:11.999821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.399 [2024-07-15 16:21:11.999832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.399 [2024-07-15 16:21:11.999852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.399 qpair failed and we were unable to recover it. 00:29:36.399 [2024-07-15 16:21:12.009764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.399 [2024-07-15 16:21:12.009840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.399 [2024-07-15 16:21:12.009858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.399 [2024-07-15 16:21:12.009865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.399 [2024-07-15 16:21:12.009872] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.399 [2024-07-15 16:21:12.009888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.399 qpair failed and we were unable to recover it. 
00:29:36.399 [2024-07-15 16:21:12.019798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.399 [2024-07-15 16:21:12.019875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.399 [2024-07-15 16:21:12.019891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.399 [2024-07-15 16:21:12.019899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.399 [2024-07-15 16:21:12.019905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.399 [2024-07-15 16:21:12.019920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.399 qpair failed and we were unable to recover it. 00:29:36.399 [2024-07-15 16:21:12.029807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.399 [2024-07-15 16:21:12.029880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.399 [2024-07-15 16:21:12.029897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.399 [2024-07-15 16:21:12.029904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.399 [2024-07-15 16:21:12.029911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.399 [2024-07-15 16:21:12.029926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.399 qpair failed and we were unable to recover it. 00:29:36.399 [2024-07-15 16:21:12.039893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.399 [2024-07-15 16:21:12.039969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.399 [2024-07-15 16:21:12.039985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.399 [2024-07-15 16:21:12.039992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.399 [2024-07-15 16:21:12.039999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.399 [2024-07-15 16:21:12.040013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.399 qpair failed and we were unable to recover it. 
00:29:36.399 [2024-07-15 16:21:12.049796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.399 [2024-07-15 16:21:12.049877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.399 [2024-07-15 16:21:12.049893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.399 [2024-07-15 16:21:12.049901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.399 [2024-07-15 16:21:12.049908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.399 [2024-07-15 16:21:12.049922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.399 qpair failed and we were unable to recover it. 00:29:36.399 [2024-07-15 16:21:12.059893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.399 [2024-07-15 16:21:12.059974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.399 [2024-07-15 16:21:12.059989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.399 [2024-07-15 16:21:12.059997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.399 [2024-07-15 16:21:12.060003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.399 [2024-07-15 16:21:12.060017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.399 qpair failed and we were unable to recover it. 00:29:36.399 [2024-07-15 16:21:12.069936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.399 [2024-07-15 16:21:12.070013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.399 [2024-07-15 16:21:12.070031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.399 [2024-07-15 16:21:12.070039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.399 [2024-07-15 16:21:12.070046] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.399 [2024-07-15 16:21:12.070061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.399 qpair failed and we were unable to recover it. 
00:29:36.399 [2024-07-15 16:21:12.079948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.399 [2024-07-15 16:21:12.080019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.399 [2024-07-15 16:21:12.080035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.399 [2024-07-15 16:21:12.080042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.399 [2024-07-15 16:21:12.080049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.399 [2024-07-15 16:21:12.080063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.399 qpair failed and we were unable to recover it. 00:29:36.399 [2024-07-15 16:21:12.089973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.399 [2024-07-15 16:21:12.090080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.399 [2024-07-15 16:21:12.090097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.399 [2024-07-15 16:21:12.090108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.399 [2024-07-15 16:21:12.090114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.399 [2024-07-15 16:21:12.090132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.399 qpair failed and we were unable to recover it. 00:29:36.399 [2024-07-15 16:21:12.099999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.399 [2024-07-15 16:21:12.100077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.399 [2024-07-15 16:21:12.100093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.399 [2024-07-15 16:21:12.100101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.399 [2024-07-15 16:21:12.100107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.399 [2024-07-15 16:21:12.100126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.399 qpair failed and we were unable to recover it. 
00:29:36.399 [2024-07-15 16:21:12.110015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.399 [2024-07-15 16:21:12.110093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.399 [2024-07-15 16:21:12.110109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.399 [2024-07-15 16:21:12.110116] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.399 [2024-07-15 16:21:12.110126] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.399 [2024-07-15 16:21:12.110142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.399 qpair failed and we were unable to recover it. 00:29:36.399 [2024-07-15 16:21:12.120041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.399 [2024-07-15 16:21:12.120115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.399 [2024-07-15 16:21:12.120135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.399 [2024-07-15 16:21:12.120143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.400 [2024-07-15 16:21:12.120149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.400 [2024-07-15 16:21:12.120164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.400 qpair failed and we were unable to recover it. 00:29:36.400 [2024-07-15 16:21:12.130103] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.400 [2024-07-15 16:21:12.130181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.400 [2024-07-15 16:21:12.130198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.400 [2024-07-15 16:21:12.130206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.400 [2024-07-15 16:21:12.130213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.400 [2024-07-15 16:21:12.130227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.400 qpair failed and we were unable to recover it. 
00:29:36.400 [2024-07-15 16:21:12.140094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.400 [2024-07-15 16:21:12.140171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.400 [2024-07-15 16:21:12.140188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.400 [2024-07-15 16:21:12.140196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.400 [2024-07-15 16:21:12.140202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.400 [2024-07-15 16:21:12.140217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.400 qpair failed and we were unable to recover it. 00:29:36.400 [2024-07-15 16:21:12.150031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.400 [2024-07-15 16:21:12.150104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.400 [2024-07-15 16:21:12.150120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.400 [2024-07-15 16:21:12.150131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.400 [2024-07-15 16:21:12.150138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.400 [2024-07-15 16:21:12.150152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.400 qpair failed and we were unable to recover it. 00:29:36.400 [2024-07-15 16:21:12.160168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.400 [2024-07-15 16:21:12.160241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.400 [2024-07-15 16:21:12.160256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.400 [2024-07-15 16:21:12.160264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.400 [2024-07-15 16:21:12.160270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.400 [2024-07-15 16:21:12.160284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.400 qpair failed and we were unable to recover it. 
00:29:36.400 [2024-07-15 16:21:12.170183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.400 [2024-07-15 16:21:12.170258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.400 [2024-07-15 16:21:12.170274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.400 [2024-07-15 16:21:12.170281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.400 [2024-07-15 16:21:12.170287] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.400 [2024-07-15 16:21:12.170301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.400 qpair failed and we were unable to recover it. 00:29:36.400 [2024-07-15 16:21:12.180218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.400 [2024-07-15 16:21:12.180297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.400 [2024-07-15 16:21:12.180313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.400 [2024-07-15 16:21:12.180324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.400 [2024-07-15 16:21:12.180330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.400 [2024-07-15 16:21:12.180345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.400 qpair failed and we were unable to recover it. 00:29:36.400 [2024-07-15 16:21:12.190255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.400 [2024-07-15 16:21:12.190362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.400 [2024-07-15 16:21:12.190378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.400 [2024-07-15 16:21:12.190386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.400 [2024-07-15 16:21:12.190392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.400 [2024-07-15 16:21:12.190407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.400 qpair failed and we were unable to recover it. 
00:29:36.400 [2024-07-15 16:21:12.200268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.400 [2024-07-15 16:21:12.200346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.400 [2024-07-15 16:21:12.200362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.400 [2024-07-15 16:21:12.200369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.400 [2024-07-15 16:21:12.200376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.400 [2024-07-15 16:21:12.200390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.400 qpair failed and we were unable to recover it. 00:29:36.400 [2024-07-15 16:21:12.210300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.400 [2024-07-15 16:21:12.210374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.400 [2024-07-15 16:21:12.210390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.400 [2024-07-15 16:21:12.210398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.400 [2024-07-15 16:21:12.210404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.400 [2024-07-15 16:21:12.210418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.400 qpair failed and we were unable to recover it. 00:29:36.400 [2024-07-15 16:21:12.220333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.400 [2024-07-15 16:21:12.220412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.400 [2024-07-15 16:21:12.220428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.400 [2024-07-15 16:21:12.220435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.400 [2024-07-15 16:21:12.220441] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.400 [2024-07-15 16:21:12.220456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.400 qpair failed and we were unable to recover it. 
00:29:36.400 [2024-07-15 16:21:12.230246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.400 [2024-07-15 16:21:12.230321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.400 [2024-07-15 16:21:12.230337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.400 [2024-07-15 16:21:12.230344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.400 [2024-07-15 16:21:12.230350] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.400 [2024-07-15 16:21:12.230364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.400 qpair failed and we were unable to recover it. 00:29:36.663 [2024-07-15 16:21:12.240398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.663 [2024-07-15 16:21:12.240471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.663 [2024-07-15 16:21:12.240487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.663 [2024-07-15 16:21:12.240494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.663 [2024-07-15 16:21:12.240501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.663 [2024-07-15 16:21:12.240516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.663 qpair failed and we were unable to recover it. 00:29:36.663 [2024-07-15 16:21:12.250415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.663 [2024-07-15 16:21:12.250499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.663 [2024-07-15 16:21:12.250515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.663 [2024-07-15 16:21:12.250522] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.663 [2024-07-15 16:21:12.250528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.663 [2024-07-15 16:21:12.250542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.663 qpair failed and we were unable to recover it. 
00:29:36.663 [2024-07-15 16:21:12.260474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.663 [2024-07-15 16:21:12.260559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.663 [2024-07-15 16:21:12.260575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.663 [2024-07-15 16:21:12.260582] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.663 [2024-07-15 16:21:12.260588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.663 [2024-07-15 16:21:12.260603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.663 qpair failed and we were unable to recover it. 00:29:36.663 [2024-07-15 16:21:12.270465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.663 [2024-07-15 16:21:12.270534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.663 [2024-07-15 16:21:12.270550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.663 [2024-07-15 16:21:12.270561] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.663 [2024-07-15 16:21:12.270567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.663 [2024-07-15 16:21:12.270581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.663 qpair failed and we were unable to recover it. 00:29:36.663 [2024-07-15 16:21:12.280391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.663 [2024-07-15 16:21:12.280468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.663 [2024-07-15 16:21:12.280485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.663 [2024-07-15 16:21:12.280493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.663 [2024-07-15 16:21:12.280499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.663 [2024-07-15 16:21:12.280514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.663 qpair failed and we were unable to recover it. 
00:29:36.663 [2024-07-15 16:21:12.290551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.663 [2024-07-15 16:21:12.290627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.663 [2024-07-15 16:21:12.290643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.663 [2024-07-15 16:21:12.290651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.663 [2024-07-15 16:21:12.290657] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.663 [2024-07-15 16:21:12.290671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.663 qpair failed and we were unable to recover it. 00:29:36.663 [2024-07-15 16:21:12.300546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.663 [2024-07-15 16:21:12.300623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.663 [2024-07-15 16:21:12.300639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.663 [2024-07-15 16:21:12.300646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.663 [2024-07-15 16:21:12.300652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.663 [2024-07-15 16:21:12.300667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.663 qpair failed and we were unable to recover it. 00:29:36.663 [2024-07-15 16:21:12.310562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.663 [2024-07-15 16:21:12.310635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.663 [2024-07-15 16:21:12.310656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.663 [2024-07-15 16:21:12.310663] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.663 [2024-07-15 16:21:12.310670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.663 [2024-07-15 16:21:12.310685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.663 qpair failed and we were unable to recover it. 
00:29:36.663 [2024-07-15 16:21:12.320636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.663 [2024-07-15 16:21:12.320715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.663 [2024-07-15 16:21:12.320731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.663 [2024-07-15 16:21:12.320739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.663 [2024-07-15 16:21:12.320745] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.663 [2024-07-15 16:21:12.320760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.663 qpair failed and we were unable to recover it. 00:29:36.663 [2024-07-15 16:21:12.330621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.663 [2024-07-15 16:21:12.330694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.663 [2024-07-15 16:21:12.330710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.663 [2024-07-15 16:21:12.330718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.663 [2024-07-15 16:21:12.330724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.663 [2024-07-15 16:21:12.330738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.663 qpair failed and we were unable to recover it. 00:29:36.663 [2024-07-15 16:21:12.340676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.663 [2024-07-15 16:21:12.340762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.663 [2024-07-15 16:21:12.340778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.663 [2024-07-15 16:21:12.340785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.663 [2024-07-15 16:21:12.340791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.663 [2024-07-15 16:21:12.340806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.663 qpair failed and we were unable to recover it. 
00:29:36.663 [2024-07-15 16:21:12.350670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.663 [2024-07-15 16:21:12.350743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.663 [2024-07-15 16:21:12.350758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.663 [2024-07-15 16:21:12.350766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.663 [2024-07-15 16:21:12.350772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.663 [2024-07-15 16:21:12.350785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.664 qpair failed and we were unable to recover it. 00:29:36.664 [2024-07-15 16:21:12.360702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.664 [2024-07-15 16:21:12.360808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.664 [2024-07-15 16:21:12.360827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.664 [2024-07-15 16:21:12.360835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.664 [2024-07-15 16:21:12.360841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.664 [2024-07-15 16:21:12.360856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.664 qpair failed and we were unable to recover it. 00:29:36.664 [2024-07-15 16:21:12.370769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.664 [2024-07-15 16:21:12.370854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.664 [2024-07-15 16:21:12.370870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.664 [2024-07-15 16:21:12.370878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.664 [2024-07-15 16:21:12.370884] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.664 [2024-07-15 16:21:12.370898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.664 qpair failed and we were unable to recover it. 
00:29:36.664 [2024-07-15 16:21:12.380745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.664 [2024-07-15 16:21:12.380824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.664 [2024-07-15 16:21:12.380840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.664 [2024-07-15 16:21:12.380847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.664 [2024-07-15 16:21:12.380854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.664 [2024-07-15 16:21:12.380868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.664 qpair failed and we were unable to recover it. 00:29:36.664 [2024-07-15 16:21:12.390771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.664 [2024-07-15 16:21:12.390845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.664 [2024-07-15 16:21:12.390861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.664 [2024-07-15 16:21:12.390868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.664 [2024-07-15 16:21:12.390874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.664 [2024-07-15 16:21:12.390888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.664 qpair failed and we were unable to recover it. 00:29:36.664 [2024-07-15 16:21:12.400794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.664 [2024-07-15 16:21:12.400864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.664 [2024-07-15 16:21:12.400880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.664 [2024-07-15 16:21:12.400887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.664 [2024-07-15 16:21:12.400893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.664 [2024-07-15 16:21:12.400907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.664 qpair failed and we were unable to recover it. 
00:29:36.664 [2024-07-15 16:21:12.410823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.664 [2024-07-15 16:21:12.410895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.664 [2024-07-15 16:21:12.410911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.664 [2024-07-15 16:21:12.410918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.664 [2024-07-15 16:21:12.410925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1477220 00:29:36.664 [2024-07-15 16:21:12.410939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.664 qpair failed and we were unable to recover it. 00:29:36.664 [2024-07-15 16:21:12.411331] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1484f20 is same with the state(5) to be set 00:29:36.664 [2024-07-15 16:21:12.420857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.664 [2024-07-15 16:21:12.420934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.664 [2024-07-15 16:21:12.420952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.664 [2024-07-15 16:21:12.420959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.664 [2024-07-15 16:21:12.420964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f724c000b90 00:29:36.664 [2024-07-15 16:21:12.420979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.664 qpair failed and we were unable to recover it. 00:29:36.664 [2024-07-15 16:21:12.430877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.664 [2024-07-15 16:21:12.430935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.664 [2024-07-15 16:21:12.430949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.664 [2024-07-15 16:21:12.430954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.664 [2024-07-15 16:21:12.430959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f724c000b90 00:29:36.664 [2024-07-15 16:21:12.430971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.664 qpair failed and we were unable to recover it. 
00:29:36.664 [2024-07-15 16:21:12.440984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.664 [2024-07-15 16:21:12.441182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.664 [2024-07-15 16:21:12.441247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.664 [2024-07-15 16:21:12.441272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.664 [2024-07-15 16:21:12.441292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:36.664 [2024-07-15 16:21:12.441348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.664 qpair failed and we were unable to recover it. 00:29:36.664 [2024-07-15 16:21:12.451046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.664 [2024-07-15 16:21:12.451231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.664 [2024-07-15 16:21:12.451276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.664 [2024-07-15 16:21:12.451296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.664 [2024-07-15 16:21:12.451316] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7254000b90 00:29:36.664 [2024-07-15 16:21:12.451359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.664 qpair failed and we were unable to recover it. 
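Every failure block above carries the same connection tuple (trtype TCP, traddr 10.0.0.2, trsvcid 4420, subnqn nqn.2016-06.io.spdk:cnode1) and the same completion status, sct 1 / sc 130 (0x82), which for a Fabrics CONNECT command appears to map to the Connect Invalid Parameters status; that is consistent with the target-side "Unknown controller ID 0x1" message, since the I/O qpair CONNECT references a controller the target has already torn down during the forced disconnect, and the host side then reports the qpair as a CQ transport error -6 (ENXIO). As a rough illustration only, the same tuple could be probed by hand with the kernel nvme-cli as sketched below; this assumes nvme-cli is installed on the initiator host and is not what the autotest runs (the test drives the SPDK userspace initiator, not the kernel driver).

# Hedged manual probe of the target address seen in the failures above (assumes nvme-cli).
nvme discover -t tcp -a 10.0.0.2 -s 4420             # list subsystems exported on 10.0.0.2:4420
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
             -n nqn.2016-06.io.spdk:cnode1           # attempt the same CONNECT the test keeps retrying
nvme list                                            # a successful connect appears as a new /dev/nvmeX
nvme disconnect -n nqn.2016-06.io.spdk:cnode1        # detach the manual connection again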
00:29:36.664 Read completed with error (sct=0, sc=8) 00:29:36.664 starting I/O failed 00:29:36.664 Read completed with error (sct=0, sc=8) 00:29:36.664 starting I/O failed 00:29:36.664 Read completed with error (sct=0, sc=8) 00:29:36.664 starting I/O failed 00:29:36.664 Read completed with error (sct=0, sc=8) 00:29:36.664 starting I/O failed 00:29:36.664 Read completed with error (sct=0, sc=8) 00:29:36.664 starting I/O failed 00:29:36.664 Read completed with error (sct=0, sc=8) 00:29:36.664 starting I/O failed 00:29:36.664 Read completed with error (sct=0, sc=8) 00:29:36.664 starting I/O failed 00:29:36.664 Read completed with error (sct=0, sc=8) 00:29:36.664 starting I/O failed 00:29:36.664 Read completed with error (sct=0, sc=8) 00:29:36.664 starting I/O failed 00:29:36.664 Read completed with error (sct=0, sc=8) 00:29:36.664 starting I/O failed 00:29:36.664 Read completed with error (sct=0, sc=8) 00:29:36.664 starting I/O failed 00:29:36.664 Read completed with error (sct=0, sc=8) 00:29:36.664 starting I/O failed 00:29:36.664 Read completed with error (sct=0, sc=8) 00:29:36.664 starting I/O failed 00:29:36.664 Read completed with error (sct=0, sc=8) 00:29:36.664 starting I/O failed 00:29:36.664 Write completed with error (sct=0, sc=8) 00:29:36.664 starting I/O failed 00:29:36.664 Read completed with error (sct=0, sc=8) 00:29:36.664 starting I/O failed 00:29:36.664 Read completed with error (sct=0, sc=8) 00:29:36.664 starting I/O failed 00:29:36.664 Write completed with error (sct=0, sc=8) 00:29:36.664 starting I/O failed 00:29:36.664 Read completed with error (sct=0, sc=8) 00:29:36.664 starting I/O failed 00:29:36.664 Write completed with error (sct=0, sc=8) 00:29:36.664 starting I/O failed 00:29:36.664 Read completed with error (sct=0, sc=8) 00:29:36.664 starting I/O failed 00:29:36.664 Write completed with error (sct=0, sc=8) 00:29:36.664 starting I/O failed 00:29:36.664 Write completed with error (sct=0, sc=8) 00:29:36.664 starting I/O failed 00:29:36.664 Write completed with error (sct=0, sc=8) 00:29:36.664 starting I/O failed 00:29:36.664 Read completed with error (sct=0, sc=8) 00:29:36.664 starting I/O failed 00:29:36.664 Read completed with error (sct=0, sc=8) 00:29:36.664 starting I/O failed 00:29:36.664 Read completed with error (sct=0, sc=8) 00:29:36.664 starting I/O failed 00:29:36.664 Write completed with error (sct=0, sc=8) 00:29:36.664 starting I/O failed 00:29:36.664 Read completed with error (sct=0, sc=8) 00:29:36.664 starting I/O failed 00:29:36.664 Write completed with error (sct=0, sc=8) 00:29:36.664 starting I/O failed 00:29:36.664 Read completed with error (sct=0, sc=8) 00:29:36.664 starting I/O failed 00:29:36.664 Write completed with error (sct=0, sc=8) 00:29:36.664 starting I/O failed 00:29:36.664 [2024-07-15 16:21:12.452229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.665 [2024-07-15 16:21:12.460963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.665 [2024-07-15 16:21:12.461076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.665 [2024-07-15 16:21:12.461142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.665 [2024-07-15 16:21:12.461166] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric 
CONNECT command 00:29:36.665 [2024-07-15 16:21:12.461186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7244000b90 00:29:36.665 [2024-07-15 16:21:12.461234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.665 qpair failed and we were unable to recover it. 00:29:36.665 [2024-07-15 16:21:12.471055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.665 [2024-07-15 16:21:12.471197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.665 [2024-07-15 16:21:12.471237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.665 [2024-07-15 16:21:12.471254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.665 [2024-07-15 16:21:12.471268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7244000b90 00:29:36.665 [2024-07-15 16:21:12.471300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.665 qpair failed and we were unable to recover it. 00:29:36.665 [2024-07-15 16:21:12.471628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1484f20 (9): Bad file descriptor 00:29:36.665 Initializing NVMe Controllers 00:29:36.665 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:36.665 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:36.665 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:36.665 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:36.665 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:36.665 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:36.665 Initialization complete. Launching workers. 
00:29:36.665 Starting thread on core 1 00:29:36.665 Starting thread on core 2 00:29:36.665 Starting thread on core 3 00:29:36.665 Starting thread on core 0 00:29:36.665 16:21:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:29:36.665 00:29:36.665 real 0m11.290s 00:29:36.665 user 0m21.060s 00:29:36.665 sys 0m4.056s 00:29:36.665 16:21:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:36.665 16:21:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:36.665 ************************************ 00:29:36.665 END TEST nvmf_target_disconnect_tc2 00:29:36.665 ************************************ 00:29:36.926 16:21:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:29:36.926 16:21:12 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:29:36.926 16:21:12 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:29:36.926 16:21:12 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:29:36.926 16:21:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:36.926 16:21:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:29:36.926 16:21:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:36.926 16:21:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:29:36.926 16:21:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:36.926 16:21:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:36.926 rmmod nvme_tcp 00:29:36.926 rmmod nvme_fabrics 00:29:36.926 rmmod nvme_keyring 00:29:36.926 16:21:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:36.926 16:21:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:29:36.926 16:21:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:29:36.926 16:21:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2477897 ']' 00:29:36.926 16:21:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2477897 00:29:36.926 16:21:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 2477897 ']' 00:29:36.926 16:21:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 2477897 00:29:36.926 16:21:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:29:36.926 16:21:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:36.926 16:21:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2477897 00:29:36.926 16:21:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:29:36.926 16:21:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:29:36.926 16:21:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2477897' 00:29:36.926 killing process with pid 2477897 00:29:36.926 16:21:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 2477897 00:29:36.926 16:21:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 2477897 00:29:36.926 
16:21:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:37.187 16:21:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:37.187 16:21:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:37.187 16:21:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:37.187 16:21:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:37.187 16:21:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:37.187 16:21:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:37.187 16:21:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.110 16:21:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:39.110 00:29:39.110 real 0m21.239s 00:29:39.110 user 0m48.463s 00:29:39.110 sys 0m9.746s 00:29:39.110 16:21:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:39.110 16:21:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:39.110 ************************************ 00:29:39.110 END TEST nvmf_target_disconnect 00:29:39.110 ************************************ 00:29:39.110 16:21:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:39.110 16:21:14 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:29:39.110 16:21:14 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:39.110 16:21:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:39.110 16:21:14 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:29:39.110 00:29:39.110 real 22m37.686s 00:29:39.110 user 47m13.515s 00:29:39.110 sys 7m8.478s 00:29:39.110 16:21:14 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:39.110 16:21:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:39.110 ************************************ 00:29:39.110 END TEST nvmf_tcp 00:29:39.110 ************************************ 00:29:39.371 16:21:14 -- common/autotest_common.sh@1142 -- # return 0 00:29:39.371 16:21:14 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:29:39.371 16:21:14 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:39.371 16:21:14 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:39.371 16:21:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:39.371 16:21:14 -- common/autotest_common.sh@10 -- # set +x 00:29:39.371 ************************************ 00:29:39.371 START TEST spdkcli_nvmf_tcp 00:29:39.371 ************************************ 00:29:39.371 16:21:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:39.371 * Looking for test storage... 
00:29:39.371 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:29:39.371 16:21:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:29:39.371 16:21:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:29:39.371 16:21:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:29:39.371 16:21:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:39.371 16:21:15 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:29:39.371 16:21:15 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:39.371 16:21:15 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:39.371 16:21:15 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:39.371 16:21:15 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:39.371 16:21:15 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:39.371 16:21:15 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:39.371 16:21:15 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:39.371 16:21:15 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:39.371 16:21:15 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:39.371 16:21:15 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:39.371 16:21:15 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:39.371 16:21:15 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:39.371 16:21:15 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:39.371 16:21:15 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:39.371 16:21:15 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:39.371 16:21:15 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:39.371 16:21:15 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:39.371 16:21:15 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:39.371 16:21:15 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:39.372 16:21:15 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:39.372 16:21:15 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.372 16:21:15 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.372 16:21:15 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.372 16:21:15 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:29:39.372 16:21:15 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.372 16:21:15 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:29:39.372 16:21:15 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:39.372 16:21:15 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:39.372 16:21:15 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:39.372 16:21:15 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:39.372 16:21:15 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:39.372 16:21:15 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:39.372 16:21:15 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:39.372 16:21:15 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:39.372 16:21:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:29:39.372 16:21:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:29:39.372 16:21:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:29:39.372 16:21:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:29:39.372 16:21:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:39.372 16:21:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:39.372 16:21:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:29:39.372 16:21:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2480183 00:29:39.372 16:21:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2480183 00:29:39.372 16:21:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 2480183 ']' 00:29:39.372 16:21:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:39.372 16:21:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:39.372 16:21:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:39.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:39.372 16:21:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:39.372 16:21:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:39.372 16:21:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:29:39.372 [2024-07-15 16:21:15.155916] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:29:39.372 [2024-07-15 16:21:15.155968] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2480183 ] 00:29:39.372 EAL: No free 2048 kB hugepages reported on node 1 00:29:39.372 [2024-07-15 16:21:15.209074] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:39.632 [2024-07-15 16:21:15.275476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:39.632 [2024-07-15 16:21:15.275564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:40.202 16:21:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:40.202 16:21:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:29:40.202 16:21:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:29:40.202 16:21:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:40.202 16:21:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:40.202 16:21:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:29:40.202 16:21:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:29:40.202 16:21:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:29:40.202 16:21:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:40.202 16:21:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:40.202 16:21:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:29:40.202 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:40.202 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:29:40.202 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:29:40.202 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:29:40.202 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:29:40.202 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:29:40.202 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:40.202 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:29:40.202 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:29:40.202 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:40.202 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:40.202 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:29:40.202 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:40.203 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:40.203 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:29:40.203 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:40.203 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:40.203 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:40.203 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:40.203 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:29:40.203 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:29:40.203 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:40.203 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:29:40.203 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:40.203 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:29:40.203 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:29:40.203 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:29:40.203 ' 00:29:42.741 [2024-07-15 16:21:18.283801] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:43.681 [2024-07-15 16:21:19.447600] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:29:46.223 [2024-07-15 16:21:21.589975] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:29:47.607 [2024-07-15 16:21:23.427257] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:29:49.519 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:29:49.519 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:29:49.519 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:29:49.519 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:29:49.519 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:29:49.519 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:29:49.519 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:29:49.519 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:49.519 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:29:49.519 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:29:49.519 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:49.519 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:49.519 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:29:49.519 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:49.519 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:49.519 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:29:49.519 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:49.519 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:49.519 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:49.519 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:49.519 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:29:49.519 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:29:49.519 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:49.519 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:29:49.519 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:49.519 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:29:49.519 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:29:49.519 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:29:49.519 16:21:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:29:49.519 16:21:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:49.519 16:21:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:49.519 16:21:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:29:49.519 16:21:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:49.519 16:21:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:49.519 16:21:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:29:49.519 16:21:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:29:49.519 16:21:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:29:49.780 16:21:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:29:49.780 16:21:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:29:49.780 16:21:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:49.780 16:21:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:49.780 16:21:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:29:49.780 16:21:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:49.780 16:21:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:49.780 16:21:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:29:49.780 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:29:49.780 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:49.780 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:29:49.780 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:29:49.780 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:29:49.780 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:29:49.780 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:49.780 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:29:49.780 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:29:49.780 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:29:49.780 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:29:49.780 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:29:49.780 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:29:49.780 ' 00:29:55.068 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:29:55.068 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:29:55.068 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:55.068 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:29:55.068 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:29:55.068 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:29:55.068 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:29:55.068 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:55.068 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:29:55.068 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 
00:29:55.068 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:29:55.068 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:29:55.068 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:29:55.068 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:29:55.068 16:21:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:29:55.068 16:21:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:55.068 16:21:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:55.068 16:21:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2480183 00:29:55.068 16:21:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 2480183 ']' 00:29:55.068 16:21:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 2480183 00:29:55.068 16:21:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:29:55.068 16:21:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:55.068 16:21:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2480183 00:29:55.328 16:21:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:55.328 16:21:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:55.328 16:21:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2480183' 00:29:55.328 killing process with pid 2480183 00:29:55.328 16:21:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 2480183 00:29:55.328 16:21:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 2480183 00:29:55.328 16:21:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:29:55.328 16:21:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:29:55.328 16:21:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2480183 ']' 00:29:55.328 16:21:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2480183 00:29:55.328 16:21:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 2480183 ']' 00:29:55.328 16:21:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 2480183 00:29:55.328 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2480183) - No such process 00:29:55.328 16:21:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 2480183 is not found' 00:29:55.328 Process with pid 2480183 is not found 00:29:55.328 16:21:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:29:55.328 16:21:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:29:55.329 16:21:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:29:55.329 00:29:55.329 real 0m16.080s 00:29:55.329 user 0m33.941s 00:29:55.329 sys 0m0.771s 00:29:55.329 16:21:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:55.329 16:21:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:55.329 ************************************ 00:29:55.329 END TEST spdkcli_nvmf_tcp 00:29:55.329 ************************************ 00:29:55.329 16:21:31 -- common/autotest_common.sh@1142 -- # return 0 00:29:55.329 16:21:31 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:55.329 16:21:31 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:55.329 16:21:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:55.329 16:21:31 -- common/autotest_common.sh@10 -- # set +x 00:29:55.329 ************************************ 00:29:55.329 START TEST nvmf_identify_passthru 00:29:55.329 ************************************ 00:29:55.329 16:21:31 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:55.590 * Looking for test storage... 00:29:55.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:55.590 16:21:31 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:55.590 16:21:31 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:29:55.590 16:21:31 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:55.590 16:21:31 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:55.590 16:21:31 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:55.590 16:21:31 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:55.590 16:21:31 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:55.590 16:21:31 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:55.590 16:21:31 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:55.590 16:21:31 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:55.590 16:21:31 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:55.590 16:21:31 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:55.590 16:21:31 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:55.590 16:21:31 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:55.590 16:21:31 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:55.590 16:21:31 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:55.590 16:21:31 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:55.590 16:21:31 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:55.590 16:21:31 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:55.590 16:21:31 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:55.590 16:21:31 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:55.590 16:21:31 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:55.590 16:21:31 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.590 16:21:31 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.590 16:21:31 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.590 16:21:31 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:55.590 16:21:31 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.590 16:21:31 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:29:55.590 16:21:31 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:55.590 16:21:31 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:55.590 16:21:31 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:55.590 16:21:31 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:55.590 16:21:31 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:55.590 16:21:31 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:55.590 16:21:31 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:55.590 16:21:31 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:55.590 16:21:31 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:55.590 16:21:31 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:55.590 16:21:31 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:55.590 16:21:31 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:55.590 16:21:31 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.590 16:21:31 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.590 16:21:31 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.590 16:21:31 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:55.591 16:21:31 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.591 16:21:31 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:29:55.591 16:21:31 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:55.591 16:21:31 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:55.591 16:21:31 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:55.591 16:21:31 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:55.591 16:21:31 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:55.591 16:21:31 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:55.591 16:21:31 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:55.591 16:21:31 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:55.591 16:21:31 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:55.591 16:21:31 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:55.591 16:21:31 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:29:55.591 16:21:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:02.188 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:02.188 16:21:38 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:30:02.188 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:02.188 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:02.188 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:02.188 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:02.188 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:02.188 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:30:02.188 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:02.188 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:30:02.188 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:30:02.188 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:30:02.188 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:30:02.188 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:30:02.188 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:30:02.188 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:02.188 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:02.188 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:02.188 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:02.188 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:02.188 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:02.188 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:02.188 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:02.188 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:02.188 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:02.188 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:02.188 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:02.188 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:02.188 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:02.188 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:02.188 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:02.189 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:02.189 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:02.189 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:02.189 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
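For reference, the network-namespace setup that nvmf_tcp_init performs in the next few lines of this log reduces to roughly the shell sketch below. The interface names (cvl_0_0, cvl_0_1), the namespace name (cvl_0_0_ns_spdk) and the 10.0.0.1/10.0.0.2 addresses are the values this particular run detected for its two E810 ports; on another host they would differ, so treat this as an illustration of the test topology rather than a copy-paste recipe:

    # Run as root. Values below are taken from this run's log output.
    # Move the target-side port into its own namespace so the initiator and the
    # NVMe/TCP target can talk over the physical link on the same machine.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Address the initiator-side port in the default namespace and the
    # target-side port inside the namespace.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    # Bring both links (and loopback inside the namespace) up.
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Allow NVMe/TCP traffic to port 4420 and verify reachability both ways.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The log entries that follow show these same steps executed by nvmf/common.sh, ending with the two ping checks whose statistics are printed below.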
00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:02.189 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:02.449 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:02.449 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:02.449 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:02.449 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:02.449 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:02.449 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:02.449 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:02.711 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:02.711 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:02.711 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.522 ms 00:30:02.711 00:30:02.711 --- 10.0.0.2 ping statistics --- 00:30:02.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:02.711 rtt min/avg/max/mdev = 0.522/0.522/0.522/0.000 ms 00:30:02.711 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:02.711 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:02.711 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.477 ms 00:30:02.711 00:30:02.711 --- 10.0.0.1 ping statistics --- 00:30:02.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:02.711 rtt min/avg/max/mdev = 0.477/0.477/0.477/0.000 ms 00:30:02.711 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:02.711 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:30:02.711 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:02.711 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:02.711 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:02.711 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:02.711 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:02.711 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:02.711 16:21:38 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:02.711 16:21:38 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:30:02.711 16:21:38 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:02.711 16:21:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:02.711 16:21:38 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:30:02.711 16:21:38 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:30:02.711 16:21:38 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:30:02.711 16:21:38 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:30:02.711 16:21:38 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:30:02.711 16:21:38 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:30:02.711 16:21:38 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:30:02.711 16:21:38 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:02.711 16:21:38 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:02.711 16:21:38 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:30:02.711 16:21:38 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:30:02.711 16:21:38 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:30:02.711 16:21:38 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:65:00.0 00:30:02.711 16:21:38 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:30:02.711 16:21:38 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:30:02.711 16:21:38 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:30:02.711 16:21:38 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:30:02.711 16:21:38 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:30:02.711 EAL: No free 2048 kB hugepages reported on node 1 00:30:03.283 
16:21:38 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605487 00:30:03.283 16:21:38 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:30:03.283 16:21:38 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:30:03.283 16:21:38 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:30:03.283 EAL: No free 2048 kB hugepages reported on node 1 00:30:03.543 16:21:39 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:30:03.543 16:21:39 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:30:03.543 16:21:39 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:03.543 16:21:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:03.808 16:21:39 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:30:03.808 16:21:39 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:03.808 16:21:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:03.808 16:21:39 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2487009 00:30:03.808 16:21:39 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:03.808 16:21:39 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2487009 00:30:03.808 16:21:39 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 2487009 ']' 00:30:03.808 16:21:39 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:03.808 16:21:39 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:03.808 16:21:39 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:03.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:03.808 16:21:39 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:03.808 16:21:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:03.808 16:21:39 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:03.808 [2024-07-15 16:21:39.470649] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:30:03.808 [2024-07-15 16:21:39.470700] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:03.808 EAL: No free 2048 kB hugepages reported on node 1 00:30:03.808 [2024-07-15 16:21:39.535130] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:03.808 [2024-07-15 16:21:39.600374] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:03.808 [2024-07-15 16:21:39.600409] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:03.808 [2024-07-15 16:21:39.600416] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:03.808 [2024-07-15 16:21:39.600423] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:03.808 [2024-07-15 16:21:39.600428] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:03.808 [2024-07-15 16:21:39.600567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:03.808 [2024-07-15 16:21:39.600687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:03.809 [2024-07-15 16:21:39.600847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:03.809 [2024-07-15 16:21:39.600848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:04.419 16:21:40 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:04.419 16:21:40 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:30:04.419 16:21:40 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:30:04.419 16:21:40 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.419 16:21:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:04.419 INFO: Log level set to 20 00:30:04.419 INFO: Requests: 00:30:04.419 { 00:30:04.419 "jsonrpc": "2.0", 00:30:04.419 "method": "nvmf_set_config", 00:30:04.419 "id": 1, 00:30:04.419 "params": { 00:30:04.419 "admin_cmd_passthru": { 00:30:04.419 "identify_ctrlr": true 00:30:04.419 } 00:30:04.419 } 00:30:04.419 } 00:30:04.419 00:30:04.419 INFO: response: 00:30:04.419 { 00:30:04.419 "jsonrpc": "2.0", 00:30:04.419 "id": 1, 00:30:04.419 "result": true 00:30:04.419 } 00:30:04.419 00:30:04.419 16:21:40 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.679 16:21:40 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:30:04.679 16:21:40 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.679 16:21:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:04.679 INFO: Setting log level to 20 00:30:04.679 INFO: Setting log level to 20 00:30:04.679 INFO: Log level set to 20 00:30:04.679 INFO: Log level set to 20 00:30:04.679 INFO: Requests: 00:30:04.679 { 00:30:04.679 "jsonrpc": "2.0", 00:30:04.679 "method": "framework_start_init", 00:30:04.679 "id": 1 00:30:04.679 } 00:30:04.679 00:30:04.679 INFO: Requests: 00:30:04.679 { 00:30:04.679 "jsonrpc": "2.0", 00:30:04.679 "method": "framework_start_init", 00:30:04.679 "id": 1 00:30:04.679 } 00:30:04.679 00:30:04.679 [2024-07-15 16:21:40.325538] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:30:04.679 INFO: response: 00:30:04.679 { 00:30:04.679 "jsonrpc": "2.0", 00:30:04.679 "id": 1, 00:30:04.679 "result": true 00:30:04.679 } 00:30:04.679 00:30:04.679 INFO: response: 00:30:04.679 { 00:30:04.679 "jsonrpc": "2.0", 00:30:04.679 "id": 1, 00:30:04.679 "result": true 00:30:04.679 } 00:30:04.679 00:30:04.679 16:21:40 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.679 16:21:40 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:04.679 16:21:40 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.679 16:21:40 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:30:04.679 INFO: Setting log level to 40 00:30:04.679 INFO: Setting log level to 40 00:30:04.679 INFO: Setting log level to 40 00:30:04.679 [2024-07-15 16:21:40.338858] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:04.679 16:21:40 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.679 16:21:40 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:30:04.679 16:21:40 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:04.679 16:21:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:04.679 16:21:40 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:30:04.679 16:21:40 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.679 16:21:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:04.939 Nvme0n1 00:30:04.939 16:21:40 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.939 16:21:40 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:30:04.939 16:21:40 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.939 16:21:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:04.939 16:21:40 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.939 16:21:40 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:04.939 16:21:40 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.939 16:21:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:04.939 16:21:40 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.939 16:21:40 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:04.939 16:21:40 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.939 16:21:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:04.939 [2024-07-15 16:21:40.725439] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:04.939 16:21:40 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.940 16:21:40 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:30:04.940 16:21:40 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.940 16:21:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:04.940 [ 00:30:04.940 { 00:30:04.940 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:04.940 "subtype": "Discovery", 00:30:04.940 "listen_addresses": [], 00:30:04.940 "allow_any_host": true, 00:30:04.940 "hosts": [] 00:30:04.940 }, 00:30:04.940 { 00:30:04.940 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:04.940 "subtype": "NVMe", 00:30:04.940 "listen_addresses": [ 00:30:04.940 { 00:30:04.940 "trtype": "TCP", 00:30:04.940 "adrfam": "IPv4", 00:30:04.940 "traddr": "10.0.0.2", 00:30:04.940 "trsvcid": "4420" 00:30:04.940 } 00:30:04.940 ], 00:30:04.940 "allow_any_host": true, 00:30:04.940 "hosts": [], 00:30:04.940 "serial_number": 
"SPDK00000000000001", 00:30:04.940 "model_number": "SPDK bdev Controller", 00:30:04.940 "max_namespaces": 1, 00:30:04.940 "min_cntlid": 1, 00:30:04.940 "max_cntlid": 65519, 00:30:04.940 "namespaces": [ 00:30:04.940 { 00:30:04.940 "nsid": 1, 00:30:04.940 "bdev_name": "Nvme0n1", 00:30:04.940 "name": "Nvme0n1", 00:30:04.940 "nguid": "36344730526054870025384500000044", 00:30:04.940 "uuid": "36344730-5260-5487-0025-384500000044" 00:30:04.940 } 00:30:04.940 ] 00:30:04.940 } 00:30:04.940 ] 00:30:04.940 16:21:40 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.940 16:21:40 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:04.940 16:21:40 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:30:04.940 16:21:40 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:30:05.200 EAL: No free 2048 kB hugepages reported on node 1 00:30:05.462 16:21:41 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:30:05.462 16:21:41 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:05.462 16:21:41 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:30:05.462 16:21:41 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:30:05.462 EAL: No free 2048 kB hugepages reported on node 1 00:30:05.462 16:21:41 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:30:05.462 16:21:41 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:30:05.462 16:21:41 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:30:05.462 16:21:41 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:05.462 16:21:41 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:05.462 16:21:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:05.462 16:21:41 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:05.462 16:21:41 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:30:05.462 16:21:41 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:30:05.462 16:21:41 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:05.462 16:21:41 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:30:05.723 16:21:41 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:05.723 16:21:41 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:30:05.723 16:21:41 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:05.723 16:21:41 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:05.723 rmmod nvme_tcp 00:30:05.723 rmmod nvme_fabrics 00:30:05.723 rmmod nvme_keyring 00:30:05.723 16:21:41 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:05.723 16:21:41 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:30:05.723 16:21:41 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:30:05.723 16:21:41 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 2487009 ']' 00:30:05.723 16:21:41 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 2487009 00:30:05.723 16:21:41 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 2487009 ']' 00:30:05.723 16:21:41 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 2487009 00:30:05.723 16:21:41 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:30:05.723 16:21:41 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:05.723 16:21:41 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2487009 00:30:05.723 16:21:41 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:05.723 16:21:41 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:05.723 16:21:41 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2487009' 00:30:05.723 killing process with pid 2487009 00:30:05.723 16:21:41 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 2487009 00:30:05.723 16:21:41 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 2487009 00:30:05.984 16:21:41 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:05.984 16:21:41 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:05.984 16:21:41 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:05.984 16:21:41 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:05.984 16:21:41 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:05.984 16:21:41 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:05.984 16:21:41 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:05.984 16:21:41 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:08.530 16:21:43 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:08.530 00:30:08.530 real 0m12.608s 00:30:08.530 user 0m10.586s 00:30:08.530 sys 0m5.965s 00:30:08.530 16:21:43 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:08.530 16:21:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:08.530 ************************************ 00:30:08.530 END TEST nvmf_identify_passthru 00:30:08.530 ************************************ 00:30:08.530 16:21:43 -- common/autotest_common.sh@1142 -- # return 0 00:30:08.530 16:21:43 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:08.530 16:21:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:08.530 16:21:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:08.530 16:21:43 -- common/autotest_common.sh@10 -- # set +x 00:30:08.530 ************************************ 00:30:08.530 START TEST nvmf_dif 00:30:08.530 ************************************ 00:30:08.530 16:21:43 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:08.530 * Looking for test storage... 
00:30:08.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:08.530 16:21:43 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:08.530 16:21:43 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:30:08.530 16:21:43 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:08.530 16:21:43 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:08.530 16:21:43 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:08.530 16:21:43 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:08.530 16:21:43 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:08.530 16:21:43 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:08.530 16:21:43 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:08.530 16:21:43 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:08.530 16:21:43 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:08.530 16:21:43 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:08.530 16:21:43 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:08.530 16:21:43 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:08.530 16:21:43 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:08.530 16:21:43 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:08.530 16:21:43 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:08.530 16:21:43 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:08.531 16:21:43 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:08.531 16:21:43 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:08.531 16:21:43 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:08.531 16:21:43 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:08.531 16:21:43 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.531 16:21:43 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.531 16:21:43 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.531 16:21:43 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:30:08.531 16:21:43 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.531 16:21:43 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:30:08.531 16:21:43 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:08.531 16:21:43 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:08.531 16:21:43 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:08.531 16:21:43 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:08.531 16:21:43 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:08.531 16:21:43 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:08.531 16:21:43 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:08.531 16:21:43 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:08.531 16:21:43 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:30:08.531 16:21:43 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:08.531 16:21:43 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:08.531 16:21:43 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:30:08.531 16:21:43 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:30:08.531 16:21:43 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:08.531 16:21:43 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:08.531 16:21:43 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:08.531 16:21:43 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:08.531 16:21:43 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:08.531 16:21:43 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:08.531 16:21:43 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:08.531 16:21:43 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:08.531 16:21:43 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:08.531 16:21:43 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:08.531 16:21:43 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:30:08.531 16:21:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:15.122 16:21:50 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:15.122 16:21:50 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:30:15.122 16:21:50 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:15.122 16:21:50 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:15.122 16:21:50 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:15.122 16:21:50 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:15.122 16:21:50 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:15.122 16:21:50 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:30:15.122 16:21:50 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:15.122 16:21:50 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:30:15.122 16:21:50 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:30:15.122 16:21:50 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:30:15.122 16:21:50 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:30:15.122 16:21:50 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:30:15.122 16:21:50 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:30:15.122 16:21:50 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:15.122 16:21:50 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:15.122 16:21:50 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:15.122 16:21:50 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:15.122 16:21:50 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:15.122 16:21:50 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:15.122 16:21:50 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:15.122 16:21:50 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:15.122 16:21:50 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:15.122 16:21:50 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:15.122 16:21:50 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:15.122 16:21:50 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:15.122 16:21:50 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:15.122 16:21:50 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:15.122 16:21:50 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:15.122 16:21:50 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:15.122 16:21:50 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:15.122 16:21:50 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:15.122 16:21:50 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:15.122 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:15.122 16:21:50 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:15.122 16:21:50 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:15.122 16:21:50 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:15.122 16:21:50 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:15.122 16:21:50 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:15.122 16:21:50 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:15.122 16:21:50 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:15.122 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:15.123 16:21:50 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:15.123 16:21:50 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:15.123 16:21:50 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:15.123 16:21:50 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:15.123 16:21:50 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:15.123 16:21:50 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:15.123 16:21:50 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:15.123 16:21:50 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:15.123 16:21:50 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:15.123 16:21:50 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:15.123 16:21:50 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:15.123 16:21:50 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
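The block above is the NIC discovery step: the helper collects the PCI IDs it supports (Intel E810/x722 and several Mellanox ConnectX parts), keeps only the E810 entries because SPDK_TEST_NVMF_NICS=e810, and then maps each matching PCI function to its Linux net device through sysfs. A rough stand-alone version of that sysfs lookup, with the vendor/device IDs copied from the trace (the loop structure is illustrative, not the helper's exact code):

# report net devices backed by Intel E810 ports (0x1592 / 0x159b)
for pci in /sys/bus/pci/devices/*; do
    [ "$(cat "$pci/vendor")" = "0x8086" ] || continue
    case "$(cat "$pci/device")" in
        0x1592|0x159b)
            for netdev in "$pci"/net/*; do
                [ -e "$netdev" ] || continue    # port may have no netdev bound
                echo "Found net devices under ${pci##*/}: ${netdev##*/}"
            done
            ;;
    esac
done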
00:30:15.123 16:21:50 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:15.123 16:21:50 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:15.123 16:21:50 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:15.123 16:21:50 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:15.123 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:15.123 16:21:50 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:15.123 16:21:50 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:15.123 16:21:50 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:15.123 16:21:50 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:15.123 16:21:50 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:15.123 16:21:50 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:15.123 16:21:50 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:15.123 16:21:50 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:15.123 16:21:50 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:15.123 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:15.123 16:21:50 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:15.123 16:21:50 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:15.123 16:21:50 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:30:15.123 16:21:50 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:15.123 16:21:50 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:15.123 16:21:50 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:15.123 16:21:50 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:15.123 16:21:50 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:15.123 16:21:50 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:15.123 16:21:50 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:15.123 16:21:50 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:15.123 16:21:50 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:15.123 16:21:50 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:15.123 16:21:50 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:15.123 16:21:50 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:15.123 16:21:50 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:15.123 16:21:50 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:15.123 16:21:50 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:15.123 16:21:50 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:15.385 16:21:51 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:15.385 16:21:51 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:15.385 16:21:51 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:15.385 16:21:51 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:15.385 16:21:51 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:15.385 16:21:51 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:15.385 16:21:51 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:15.385 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:15.385 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.530 ms 00:30:15.385 00:30:15.385 --- 10.0.0.2 ping statistics --- 00:30:15.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:15.385 rtt min/avg/max/mdev = 0.530/0.530/0.530/0.000 ms 00:30:15.385 16:21:51 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:15.385 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:15.385 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.414 ms 00:30:15.385 00:30:15.385 --- 10.0.0.1 ping statistics --- 00:30:15.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:15.385 rtt min/avg/max/mdev = 0.414/0.414/0.414/0.000 ms 00:30:15.385 16:21:51 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:15.385 16:21:51 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:30:15.385 16:21:51 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:30:15.385 16:21:51 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:18.692 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:18.692 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:18.692 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:18.692 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:18.692 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:18.692 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:18.692 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:18.692 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:18.692 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:18.692 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:30:18.692 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:18.692 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:18.692 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:18.692 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:18.692 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:18.954 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:18.954 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:19.216 16:21:54 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:19.216 16:21:54 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:19.216 16:21:54 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:19.216 16:21:54 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:19.216 16:21:54 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:19.216 16:21:54 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:19.216 16:21:54 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:30:19.216 16:21:54 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:30:19.216 16:21:54 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:19.216 16:21:54 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:19.216 16:21:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:19.216 16:21:54 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=2493112 00:30:19.216 16:21:54 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 2493112 00:30:19.216 16:21:54 nvmf_dif -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:19.216 16:21:54 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 2493112 ']' 00:30:19.216 16:21:54 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:19.216 16:21:54 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:19.216 16:21:54 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:19.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:19.216 16:21:54 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:19.216 16:21:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:19.216 [2024-07-15 16:21:54.975916] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:30:19.216 [2024-07-15 16:21:54.975979] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:19.216 EAL: No free 2048 kB hugepages reported on node 1 00:30:19.216 [2024-07-15 16:21:55.047331] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:19.477 [2024-07-15 16:21:55.121756] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:19.477 [2024-07-15 16:21:55.121793] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:19.477 [2024-07-15 16:21:55.121800] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:19.477 [2024-07-15 16:21:55.121807] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:19.477 [2024-07-15 16:21:55.121812] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
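At this point the target application is up: one E810 port (cvl_0_0) has been moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, the peer port (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator side, and nvmf_tgt runs inside the namespace so all NVMe/TCP traffic crosses the physical link. Condensed from the ip/iptables/app commands in the trace above (interface names, addresses and flags as logged):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target-side port leaves the root netns
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                            # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> initiator
# launch the target inside the namespace (same binary and flags as the log)
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &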
00:30:19.477 [2024-07-15 16:21:55.121831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:20.049 16:21:55 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:20.049 16:21:55 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:30:20.049 16:21:55 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:20.049 16:21:55 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:20.049 16:21:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:20.049 16:21:55 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:20.049 16:21:55 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:30:20.049 16:21:55 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:30:20.049 16:21:55 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.049 16:21:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:20.049 [2024-07-15 16:21:55.788623] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:20.049 16:21:55 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.049 16:21:55 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:30:20.049 16:21:55 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:20.049 16:21:55 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:20.050 16:21:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:20.050 ************************************ 00:30:20.050 START TEST fio_dif_1_default 00:30:20.050 ************************************ 00:30:20.050 16:21:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:30:20.050 16:21:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:30:20.050 16:21:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:30:20.050 16:21:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:30:20.050 16:21:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:30:20.050 16:21:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:30:20.050 16:21:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:20.050 16:21:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.050 16:21:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:20.050 bdev_null0 00:30:20.050 16:21:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.050 16:21:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:20.050 16:21:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.050 16:21:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:20.050 16:21:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.050 16:21:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:20.050 16:21:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.050 16:21:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:20.050 16:21:55 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.050 16:21:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:20.050 16:21:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.050 16:21:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:20.050 [2024-07-15 16:21:55.876963] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:20.050 16:21:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.050 16:21:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:30:20.050 16:21:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:30:20.050 16:21:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:20.050 16:21:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:30:20.050 16:21:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:20.050 16:21:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:30:20.050 16:21:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:20.050 16:21:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:20.050 16:21:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:20.050 { 00:30:20.050 "params": { 00:30:20.050 "name": "Nvme$subsystem", 00:30:20.050 "trtype": "$TEST_TRANSPORT", 00:30:20.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:20.050 "adrfam": "ipv4", 00:30:20.050 "trsvcid": "$NVMF_PORT", 00:30:20.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:20.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:20.050 "hdgst": ${hdgst:-false}, 00:30:20.050 "ddgst": ${ddgst:-false} 00:30:20.050 }, 00:30:20.050 "method": "bdev_nvme_attach_controller" 00:30:20.050 } 00:30:20.050 EOF 00:30:20.050 )") 00:30:20.050 16:21:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:30:20.050 16:21:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:20.050 16:21:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:20.050 16:21:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:30:20.050 16:21:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:20.050 16:21:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:30:20.050 16:21:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:20.050 16:21:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:30:20.050 16:21:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:20.050 16:21:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:20.050 16:21:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:30:20.050 16:21:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:20.050 16:21:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:30:20.050 16:21:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:30:20.050 16:21:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:30:20.050 16:21:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:20.311 16:21:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:30:20.311 16:21:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:30:20.311 16:21:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:20.311 "params": { 00:30:20.311 "name": "Nvme0", 00:30:20.311 "trtype": "tcp", 00:30:20.311 "traddr": "10.0.0.2", 00:30:20.311 "adrfam": "ipv4", 00:30:20.311 "trsvcid": "4420", 00:30:20.311 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:20.311 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:20.311 "hdgst": false, 00:30:20.311 "ddgst": false 00:30:20.311 }, 00:30:20.311 "method": "bdev_nvme_attach_controller" 00:30:20.311 }' 00:30:20.311 16:21:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:20.311 16:21:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:20.311 16:21:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:20.311 16:21:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:20.311 16:21:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:20.311 16:21:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:20.311 16:21:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:20.311 16:21:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:20.311 16:21:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:20.311 16:21:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:20.572 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:20.572 fio-3.35 00:30:20.572 Starting 1 thread 00:30:20.572 EAL: No free 2048 kB hugepages reported on node 1 00:30:32.823 00:30:32.823 filename0: (groupid=0, jobs=1): err= 0: pid=2493643: Mon Jul 15 16:22:06 2024 00:30:32.823 read: IOPS=185, BW=741KiB/s (759kB/s)(7424KiB/10019msec) 00:30:32.823 slat (nsec): min=5403, max=32565, avg=6161.04, stdev=1372.75 00:30:32.823 clat (usec): min=944, max=43503, avg=21574.83, stdev=20141.98 00:30:32.823 lat (usec): min=952, max=43535, avg=21580.99, stdev=20141.97 00:30:32.823 clat percentiles (usec): 00:30:32.823 | 1.00th=[ 1270], 5.00th=[ 1303], 10.00th=[ 1319], 20.00th=[ 1352], 00:30:32.823 | 30.00th=[ 1369], 40.00th=[ 1385], 50.00th=[41681], 60.00th=[41681], 00:30:32.823 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:30:32.823 | 99.00th=[41681], 99.50th=[41681], 99.90th=[43254], 99.95th=[43254], 00:30:32.823 | 99.99th=[43254] 00:30:32.823 bw ( KiB/s): min= 672, max= 768, per=99.87%, avg=740.80, stdev=33.28, samples=20 00:30:32.823 iops : min= 168, max= 192, 
avg=185.20, stdev= 8.32, samples=20 00:30:32.823 lat (usec) : 1000=0.22% 00:30:32.823 lat (msec) : 2=49.57%, 50=50.22% 00:30:32.823 cpu : usr=95.13%, sys=4.68%, ctx=10, majf=0, minf=225 00:30:32.823 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:32.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.823 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.823 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:32.823 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:32.823 00:30:32.823 Run status group 0 (all jobs): 00:30:32.823 READ: bw=741KiB/s (759kB/s), 741KiB/s-741KiB/s (759kB/s-759kB/s), io=7424KiB (7602kB), run=10019-10019msec 00:30:32.823 16:22:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:30:32.823 16:22:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:30:32.823 16:22:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:30:32.823 16:22:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:32.823 16:22:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:30:32.823 16:22:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:32.823 16:22:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.823 16:22:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:32.823 16:22:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.823 16:22:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:32.823 16:22:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.823 16:22:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:32.823 16:22:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.823 00:30:32.823 real 0m11.092s 00:30:32.823 user 0m24.701s 00:30:32.823 sys 0m0.762s 00:30:32.823 16:22:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:32.823 16:22:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:32.823 ************************************ 00:30:32.823 END TEST fio_dif_1_default 00:30:32.823 ************************************ 00:30:32.823 16:22:06 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:30:32.823 16:22:06 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:30:32.823 16:22:06 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:32.823 16:22:06 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:32.823 16:22:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:32.823 ************************************ 00:30:32.823 START TEST fio_dif_1_multi_subsystems 00:30:32.823 ************************************ 00:30:32.823 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:30:32.823 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:30:32.823 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:30:32.823 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:30:32.823 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems 
-- target/dif.sh@30 -- # for sub in "$@" 00:30:32.823 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:30:32.823 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:30:32.823 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:32.823 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.823 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:32.823 bdev_null0 00:30:32.823 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.823 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:32.823 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.823 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:32.823 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.823 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:32.823 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.823 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:32.823 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.823 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:32.823 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.823 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:32.823 [2024-07-15 16:22:07.046788] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:32.823 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.823 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:32.823 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:30:32.823 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:30:32.823 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:32.823 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.823 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:32.823 bdev_null1 00:30:32.823 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.823 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:32.823 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.823 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 
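Each create_subsystem step in dif.sh follows the same RPC sequence: a DIF-formatted null bdev is created, wrapped in a subsystem, and exposed on the TCP listener, after the transport itself was created with --dif-insert-or-strip so the target inserts and strips the 16-byte protection metadata on behalf of the host. rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py, so the equivalent plain invocations look roughly like this (arguments copied from the trace, socket path left at its default):

# transport with DIF insert/strip enabled (done once, target/dif.sh@50)
scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip

# per subsystem: 64 MB null bdev, 512-byte blocks + 16-byte metadata, DIF type 1
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420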
00:30:32.823 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.823 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:32.823 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.823 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:32.823 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.823 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:32.823 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.823 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:32.823 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.823 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:30:32.823 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:30:32.823 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:32.823 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:30:32.823 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:32.824 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:30:32.824 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:32.824 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:32.824 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:32.824 { 00:30:32.824 "params": { 00:30:32.824 "name": "Nvme$subsystem", 00:30:32.824 "trtype": "$TEST_TRANSPORT", 00:30:32.824 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:32.824 "adrfam": "ipv4", 00:30:32.824 "trsvcid": "$NVMF_PORT", 00:30:32.824 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:32.824 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:32.824 "hdgst": ${hdgst:-false}, 00:30:32.824 "ddgst": ${ddgst:-false} 00:30:32.824 }, 00:30:32.824 "method": "bdev_nvme_attach_controller" 00:30:32.824 } 00:30:32.824 EOF 00:30:32.824 )") 00:30:32.824 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:30:32.824 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:32.824 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:32.824 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:30:32.824 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:32.824 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:30:32.824 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:32.824 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:30:32.824 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:32.824 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:32.824 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:32.824 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:30:32.824 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:32.824 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:32.824 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:30:32.824 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:30:32.824 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:32.824 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:32.824 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:32.824 { 00:30:32.824 "params": { 00:30:32.824 "name": "Nvme$subsystem", 00:30:32.824 "trtype": "$TEST_TRANSPORT", 00:30:32.824 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:32.824 "adrfam": "ipv4", 00:30:32.824 "trsvcid": "$NVMF_PORT", 00:30:32.824 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:32.824 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:32.824 "hdgst": ${hdgst:-false}, 00:30:32.824 "ddgst": ${ddgst:-false} 00:30:32.824 }, 00:30:32.824 "method": "bdev_nvme_attach_controller" 00:30:32.824 } 00:30:32.824 EOF 00:30:32.824 )") 00:30:32.824 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:30:32.824 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:32.824 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:32.824 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
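The fio_bdev helper assembled above never touches /dev/nvme*: it preloads SPDK's fio bdev plugin and hands fio a JSON bdev config (through /dev/fd) whose entries attach one NVMe-oF controller per subsystem, matching the params blocks printed just below. Written out as files instead of file descriptors, the launch looks roughly like this; the outer "subsystems"/"config" wrapper is reconstructed from the usual SPDK JSON config layout and the job file name is hypothetical, only the params values are verbatim from the log:

cat > bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# the two-subsystem run appends a second entry with name Nvme1 / cnode1 / host1

# preload the plugin and run the generated job file against the attached bdev(s)
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json dif.fio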
00:30:32.824 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:30:32.824 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:32.824 "params": { 00:30:32.824 "name": "Nvme0", 00:30:32.824 "trtype": "tcp", 00:30:32.824 "traddr": "10.0.0.2", 00:30:32.824 "adrfam": "ipv4", 00:30:32.824 "trsvcid": "4420", 00:30:32.824 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:32.824 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:32.824 "hdgst": false, 00:30:32.824 "ddgst": false 00:30:32.824 }, 00:30:32.824 "method": "bdev_nvme_attach_controller" 00:30:32.824 },{ 00:30:32.824 "params": { 00:30:32.824 "name": "Nvme1", 00:30:32.824 "trtype": "tcp", 00:30:32.824 "traddr": "10.0.0.2", 00:30:32.824 "adrfam": "ipv4", 00:30:32.824 "trsvcid": "4420", 00:30:32.824 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:32.824 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:32.824 "hdgst": false, 00:30:32.824 "ddgst": false 00:30:32.824 }, 00:30:32.824 "method": "bdev_nvme_attach_controller" 00:30:32.824 }' 00:30:32.824 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:32.824 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:32.824 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:32.824 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:32.824 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:32.824 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:32.824 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:32.824 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:32.824 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:32.824 16:22:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:32.824 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:32.824 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:32.824 fio-3.35 00:30:32.824 Starting 2 threads 00:30:32.824 EAL: No free 2048 kB hugepages reported on node 1 00:30:42.822 00:30:42.822 filename0: (groupid=0, jobs=1): err= 0: pid=2495944: Mon Jul 15 16:22:18 2024 00:30:42.822 read: IOPS=185, BW=741KiB/s (759kB/s)(7424KiB/10019msec) 00:30:42.822 slat (nsec): min=5409, max=27091, avg=6296.05, stdev=1276.55 00:30:42.822 clat (usec): min=1096, max=43675, avg=21574.68, stdev=20120.47 00:30:42.822 lat (usec): min=1101, max=43702, avg=21580.97, stdev=20120.41 00:30:42.822 clat percentiles (usec): 00:30:42.822 | 1.00th=[ 1221], 5.00th=[ 1319], 10.00th=[ 1336], 20.00th=[ 1369], 00:30:42.822 | 30.00th=[ 1385], 40.00th=[ 1401], 50.00th=[41157], 60.00th=[41681], 00:30:42.822 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:30:42.822 | 99.00th=[41681], 99.50th=[41681], 99.90th=[43779], 99.95th=[43779], 00:30:42.822 | 99.99th=[43779] 
00:30:42.822 bw ( KiB/s): min= 672, max= 768, per=66.03%, avg=740.80, stdev=33.28, samples=20 00:30:42.822 iops : min= 168, max= 192, avg=185.20, stdev= 8.32, samples=20 00:30:42.822 lat (msec) : 2=49.78%, 50=50.22% 00:30:42.822 cpu : usr=96.44%, sys=3.04%, ctx=27, majf=0, minf=79 00:30:42.822 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:42.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:42.822 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:42.822 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:42.822 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:42.822 filename1: (groupid=0, jobs=1): err= 0: pid=2495945: Mon Jul 15 16:22:18 2024 00:30:42.822 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10037msec) 00:30:42.823 slat (nsec): min=5403, max=30502, avg=6303.85, stdev=1555.24 00:30:42.823 clat (usec): min=40946, max=42368, avg=41976.41, stdev=114.49 00:30:42.823 lat (usec): min=40954, max=42399, avg=41982.71, stdev=114.55 00:30:42.823 clat percentiles (usec): 00:30:42.823 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:30:42.823 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:30:42.823 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:30:42.823 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:30:42.823 | 99.99th=[42206] 00:30:42.823 bw ( KiB/s): min= 352, max= 384, per=33.91%, avg=380.80, stdev= 9.85, samples=20 00:30:42.823 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:30:42.823 lat (msec) : 50=100.00% 00:30:42.823 cpu : usr=96.97%, sys=2.83%, ctx=10, majf=0, minf=157 00:30:42.823 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:42.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:42.823 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:42.823 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:42.823 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:42.823 00:30:42.823 Run status group 0 (all jobs): 00:30:42.823 READ: bw=1121KiB/s (1148kB/s), 381KiB/s-741KiB/s (390kB/s-759kB/s), io=11.0MiB (11.5MB), run=10019-10037msec 00:30:42.823 16:22:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:30:42.823 16:22:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:30:42.823 16:22:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:42.823 16:22:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:42.823 16:22:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:30:42.823 16:22:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:42.823 16:22:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.823 16:22:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:42.823 16:22:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.823 16:22:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:42.823 16:22:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.823 16:22:18 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:42.823 16:22:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.823 16:22:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:42.823 16:22:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:42.823 16:22:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:30:42.823 16:22:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:42.823 16:22:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.823 16:22:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:42.823 16:22:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.823 16:22:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:42.823 16:22:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.823 16:22:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:42.823 16:22:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.823 00:30:42.823 real 0m11.294s 00:30:42.823 user 0m31.751s 00:30:42.823 sys 0m0.906s 00:30:42.823 16:22:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:42.823 16:22:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:42.823 ************************************ 00:30:42.823 END TEST fio_dif_1_multi_subsystems 00:30:42.823 ************************************ 00:30:42.823 16:22:18 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:30:42.823 16:22:18 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:30:42.823 16:22:18 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:42.823 16:22:18 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:42.823 16:22:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:42.823 ************************************ 00:30:42.823 START TEST fio_dif_rand_params 00:30:42.823 ************************************ 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 
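fio_dif_rand_params, which starts above, reuses the same plumbing but switches the null bdev to DIF type 3 and drives it harder: 128 KiB blocks, 3 jobs, queue depth 3, 5-second runtime (NULL_DIF=3, bs=128k, numjobs=3, iodepth=3, runtime=5 in the trace). Only the bdev creation changes, as the bdev_null_create call just below shows; a sketch of that one difference, under the same rpc.py assumption as the earlier sketch:

# same null bdev geometry as before, but protection information type 3
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
# the fio side then runs 3 jobs of 128 KiB random reads for 5 s at iodepth 3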
00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:42.823 bdev_null0 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:42.823 [2024-07-15 16:22:18.418971] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:42.823 { 00:30:42.823 "params": { 00:30:42.823 "name": "Nvme$subsystem", 00:30:42.823 "trtype": "$TEST_TRANSPORT", 00:30:42.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:42.823 "adrfam": "ipv4", 00:30:42.823 "trsvcid": "$NVMF_PORT", 00:30:42.823 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:30:42.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:42.823 "hdgst": ${hdgst:-false}, 00:30:42.823 "ddgst": ${ddgst:-false} 00:30:42.823 }, 00:30:42.823 "method": "bdev_nvme_attach_controller" 00:30:42.823 } 00:30:42.823 EOF 00:30:42.823 )") 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:42.823 "params": { 00:30:42.823 "name": "Nvme0", 00:30:42.823 "trtype": "tcp", 00:30:42.823 "traddr": "10.0.0.2", 00:30:42.823 "adrfam": "ipv4", 00:30:42.823 "trsvcid": "4420", 00:30:42.823 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:42.823 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:42.823 "hdgst": false, 00:30:42.823 "ddgst": false 00:30:42.823 }, 00:30:42.823 "method": "bdev_nvme_attach_controller" 00:30:42.823 }' 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:42.823 16:22:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:42.824 16:22:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:42.824 16:22:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:42.824 16:22:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:42.824 16:22:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:42.824 16:22:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:42.824 16:22:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:42.824 16:22:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:43.084 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:43.084 ... 
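The JSON above is what fio receives on /dev/fd/62; the job description built by gen_fio_conf arrives on /dev/fd/61 and is not echoed in the log. The following is therefore only a plausible reconstruction from the parameters set for this run (randread, bs=128k, numjobs=3, iodepth=3, runtime=5, spdk_bdev ioengine); in particular the filename=Nvme0n1 bdev name is an assumption:

# Reconstructed job file, contents assumed except for the values listed above.
# [filename0]
# filename=Nvme0n1
# rw=randread
# bs=128k
# iodepth=3
# numjobs=3
# runtime=5

# Shape of the full invocation (paths copied from the trace; gen_bdev_json and
# gen_fio_job stand in for the harness helpers that feed /dev/fd/62 and /dev/fd/61):
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev \
  --spdk_json_conf <(gen_bdev_json) <(gen_fio_job)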
00:30:43.084 fio-3.35 00:30:43.084 Starting 3 threads 00:30:43.084 EAL: No free 2048 kB hugepages reported on node 1 00:30:49.667 00:30:49.667 filename0: (groupid=0, jobs=1): err= 0: pid=2498355: Mon Jul 15 16:22:24 2024 00:30:49.667 read: IOPS=116, BW=14.6MiB/s (15.3MB/s)(73.5MiB/5040msec) 00:30:49.667 slat (nsec): min=5431, max=32388, avg=7870.03, stdev=1720.17 00:30:49.667 clat (usec): min=6109, max=93483, avg=25692.71, stdev=23492.68 00:30:49.667 lat (usec): min=6118, max=93491, avg=25700.58, stdev=23492.86 00:30:49.667 clat percentiles (usec): 00:30:49.667 | 1.00th=[ 6783], 5.00th=[ 7635], 10.00th=[ 8586], 20.00th=[ 9634], 00:30:49.667 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11731], 60.00th=[12780], 00:30:49.667 | 70.00th=[50070], 80.00th=[51643], 90.00th=[53216], 95.00th=[54789], 00:30:49.667 | 99.00th=[92799], 99.50th=[92799], 99.90th=[93848], 99.95th=[93848], 00:30:49.667 | 99.99th=[93848] 00:30:49.667 bw ( KiB/s): min= 5888, max=21504, per=29.64%, avg=14976.00, stdev=4851.63, samples=10 00:30:49.667 iops : min= 46, max= 168, avg=117.00, stdev=37.90, samples=10 00:30:49.667 lat (msec) : 10=24.83%, 20=43.03%, 50=3.06%, 100=29.08% 00:30:49.667 cpu : usr=97.02%, sys=2.72%, ctx=9, majf=0, minf=40 00:30:49.667 IO depths : 1=1.4%, 2=98.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:49.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:49.668 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:49.668 issued rwts: total=588,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:49.668 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:49.668 filename0: (groupid=0, jobs=1): err= 0: pid=2498356: Mon Jul 15 16:22:24 2024 00:30:49.668 read: IOPS=144, BW=18.1MiB/s (19.0MB/s)(91.4MiB/5047msec) 00:30:49.668 slat (nsec): min=5439, max=31973, avg=8001.66, stdev=2280.09 00:30:49.668 clat (usec): min=6999, max=95273, avg=20636.30, stdev=19466.50 00:30:49.668 lat (usec): min=7007, max=95279, avg=20644.30, stdev=19466.43 00:30:49.668 clat percentiles (usec): 00:30:49.668 | 1.00th=[ 7308], 5.00th=[ 7963], 10.00th=[ 8356], 20.00th=[ 9110], 00:30:49.668 | 30.00th=[10028], 40.00th=[10552], 50.00th=[11076], 60.00th=[11731], 00:30:49.668 | 70.00th=[13173], 80.00th=[50070], 90.00th=[52167], 95.00th=[53216], 00:30:49.668 | 99.00th=[92799], 99.50th=[93848], 99.90th=[94897], 99.95th=[94897], 00:30:49.668 | 99.99th=[94897] 00:30:49.668 bw ( KiB/s): min=13056, max=27136, per=36.94%, avg=18662.40, stdev=4865.42, samples=10 00:30:49.668 iops : min= 102, max= 212, avg=145.80, stdev=38.01, samples=10 00:30:49.668 lat (msec) : 10=29.96%, 20=47.33%, 50=1.92%, 100=20.79% 00:30:49.668 cpu : usr=96.39%, sys=3.29%, ctx=10, majf=0, minf=181 00:30:49.668 IO depths : 1=3.8%, 2=96.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:49.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:49.668 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:49.668 issued rwts: total=731,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:49.668 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:49.668 filename0: (groupid=0, jobs=1): err= 0: pid=2498357: Mon Jul 15 16:22:24 2024 00:30:49.668 read: IOPS=134, BW=16.8MiB/s (17.6MB/s)(84.1MiB/5007msec) 00:30:49.668 slat (nsec): min=5429, max=33136, avg=7714.22, stdev=2177.59 00:30:49.668 clat (usec): min=7087, max=93407, avg=22304.08, stdev=19327.76 00:30:49.668 lat (usec): min=7093, max=93416, avg=22311.79, stdev=19327.96 00:30:49.668 clat percentiles (usec): 
00:30:49.668 | 1.00th=[ 7504], 5.00th=[ 8225], 10.00th=[ 8586], 20.00th=[ 9241], 00:30:49.668 | 30.00th=[ 9896], 40.00th=[10421], 50.00th=[11076], 60.00th=[12256], 00:30:49.668 | 70.00th=[14746], 80.00th=[51119], 90.00th=[52167], 95.00th=[53216], 00:30:49.668 | 99.00th=[55313], 99.50th=[92799], 99.90th=[93848], 99.95th=[93848], 00:30:49.668 | 99.99th=[93848] 00:30:49.668 bw ( KiB/s): min= 9216, max=27904, per=33.94%, avg=17149.00, stdev=4980.22, samples=10 00:30:49.668 iops : min= 72, max= 218, avg=133.90, stdev=38.95, samples=10 00:30:49.668 lat (msec) : 10=33.14%, 20=38.93%, 50=2.97%, 100=24.96% 00:30:49.668 cpu : usr=96.54%, sys=3.18%, ctx=10, majf=0, minf=75 00:30:49.668 IO depths : 1=6.7%, 2=93.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:49.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:49.668 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:49.668 issued rwts: total=673,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:49.668 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:49.668 00:30:49.668 Run status group 0 (all jobs): 00:30:49.668 READ: bw=49.3MiB/s (51.7MB/s), 14.6MiB/s-18.1MiB/s (15.3MB/s-19.0MB/s), io=249MiB (261MB), run=5007-5047msec 00:30:49.668 16:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:30:49.668 16:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:49.668 16:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:49.668 16:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:49.668 16:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:49.668 16:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:49.668 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.668 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:49.668 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.668 16:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:49.668 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.668 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:49.668 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.668 16:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:30:49.668 16:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:30:49.668 16:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:30:49.668 16:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:30:49.668 16:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:30:49.668 16:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:30:49.668 16:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:30:49.668 16:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:49.668 16:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:49.668 16:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:49.668 16:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
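Before the second pass is set up below (DIF type 2, 4k blocks, 8 jobs at queue depth 16 against three null bdevs, 24 fio threads in total), note that the first run's summary numbers are internally consistent and give a quick way to sanity-check a report like this: average bandwidth divided by the 128 KiB block size reproduces the reported IOPS. For the first job in the report, with values copied from the log above:

# 14976 KiB/s average bandwidth at bs=128 KiB: 14976 / 128 = 117, matching "iops : ... avg=117.00".
echo $(( 14976 / 128 ))   # prints 117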
00:30:49.668 16:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:30:49.668 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.668 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:49.668 bdev_null0 00:30:49.668 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.668 16:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:49.668 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.668 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:49.668 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.668 16:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:49.668 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.668 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:49.668 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.668 16:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:49.668 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.668 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:49.668 [2024-07-15 16:22:24.536439] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:49.668 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.668 16:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:49.668 16:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:49.668 16:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:49.668 16:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:30:49.668 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:49.669 bdev_null1 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:49.669 bdev_null2 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:49.669 { 00:30:49.669 "params": { 00:30:49.669 "name": "Nvme$subsystem", 00:30:49.669 "trtype": "$TEST_TRANSPORT", 00:30:49.669 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:49.669 "adrfam": "ipv4", 00:30:49.669 "trsvcid": "$NVMF_PORT", 00:30:49.669 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:49.669 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:49.669 "hdgst": ${hdgst:-false}, 00:30:49.669 "ddgst": ${ddgst:-false} 00:30:49.669 }, 00:30:49.669 "method": "bdev_nvme_attach_controller" 00:30:49.669 } 00:30:49.669 EOF 00:30:49.669 )") 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:49.669 { 00:30:49.669 "params": { 00:30:49.669 "name": "Nvme$subsystem", 00:30:49.669 "trtype": "$TEST_TRANSPORT", 00:30:49.669 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:49.669 "adrfam": "ipv4", 00:30:49.669 "trsvcid": "$NVMF_PORT", 00:30:49.669 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:49.669 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:49.669 "hdgst": ${hdgst:-false}, 00:30:49.669 "ddgst": ${ddgst:-false} 00:30:49.669 }, 00:30:49.669 "method": "bdev_nvme_attach_controller" 00:30:49.669 } 00:30:49.669 EOF 00:30:49.669 )") 00:30:49.669 16:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:49.670 16:22:24 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:30:49.670 16:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:49.670 16:22:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:49.670 16:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:49.670 16:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:49.670 16:22:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:49.670 16:22:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:49.670 { 00:30:49.670 "params": { 00:30:49.670 "name": "Nvme$subsystem", 00:30:49.670 "trtype": "$TEST_TRANSPORT", 00:30:49.670 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:49.670 "adrfam": "ipv4", 00:30:49.670 "trsvcid": "$NVMF_PORT", 00:30:49.670 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:49.670 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:49.670 "hdgst": ${hdgst:-false}, 00:30:49.670 "ddgst": ${ddgst:-false} 00:30:49.670 }, 00:30:49.670 "method": "bdev_nvme_attach_controller" 00:30:49.670 } 00:30:49.670 EOF 00:30:49.670 )") 00:30:49.670 16:22:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:49.670 16:22:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:30:49.670 16:22:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:49.670 16:22:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:49.670 "params": { 00:30:49.670 "name": "Nvme0", 00:30:49.670 "trtype": "tcp", 00:30:49.670 "traddr": "10.0.0.2", 00:30:49.670 "adrfam": "ipv4", 00:30:49.670 "trsvcid": "4420", 00:30:49.670 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:49.670 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:49.670 "hdgst": false, 00:30:49.670 "ddgst": false 00:30:49.670 }, 00:30:49.670 "method": "bdev_nvme_attach_controller" 00:30:49.670 },{ 00:30:49.670 "params": { 00:30:49.670 "name": "Nvme1", 00:30:49.670 "trtype": "tcp", 00:30:49.670 "traddr": "10.0.0.2", 00:30:49.670 "adrfam": "ipv4", 00:30:49.670 "trsvcid": "4420", 00:30:49.670 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:49.670 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:49.670 "hdgst": false, 00:30:49.670 "ddgst": false 00:30:49.670 }, 00:30:49.670 "method": "bdev_nvme_attach_controller" 00:30:49.670 },{ 00:30:49.670 "params": { 00:30:49.670 "name": "Nvme2", 00:30:49.670 "trtype": "tcp", 00:30:49.670 "traddr": "10.0.0.2", 00:30:49.670 "adrfam": "ipv4", 00:30:49.670 "trsvcid": "4420", 00:30:49.670 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:49.670 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:49.670 "hdgst": false, 00:30:49.670 "ddgst": false 00:30:49.670 }, 00:30:49.670 "method": "bdev_nvme_attach_controller" 00:30:49.670 }' 00:30:49.670 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:49.670 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:49.670 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:49.670 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:49.670 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:49.670 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:49.670 16:22:24 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1345 -- # asan_lib= 00:30:49.670 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:49.670 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:49.670 16:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:49.670 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:49.670 ... 00:30:49.670 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:49.670 ... 00:30:49.670 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:49.670 ... 00:30:49.670 fio-3.35 00:30:49.670 Starting 24 threads 00:30:49.670 EAL: No free 2048 kB hugepages reported on node 1 00:31:01.909 00:31:01.909 filename0: (groupid=0, jobs=1): err= 0: pid=2499735: Mon Jul 15 16:22:35 2024 00:31:01.909 read: IOPS=599, BW=2397KiB/s (2455kB/s)(23.4MiB/10012msec) 00:31:01.909 slat (nsec): min=5404, max=95879, avg=8598.32, stdev=5542.55 00:31:01.909 clat (usec): min=2132, max=34499, avg=26622.19, stdev=6279.63 00:31:01.909 lat (usec): min=2156, max=34506, avg=26630.79, stdev=6280.18 00:31:01.909 clat percentiles (usec): 00:31:01.909 | 1.00th=[ 3720], 5.00th=[18220], 10.00th=[19530], 20.00th=[21103], 00:31:01.909 | 30.00th=[22152], 40.00th=[23462], 50.00th=[30802], 60.00th=[31589], 00:31:01.909 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:31:01.909 | 99.00th=[33162], 99.50th=[33817], 99.90th=[34341], 99.95th=[34341], 00:31:01.909 | 99.99th=[34341] 00:31:01.909 bw ( KiB/s): min= 1920, max= 2944, per=4.96%, avg=2370.53, stdev=403.06, samples=19 00:31:01.909 iops : min= 480, max= 736, avg=592.53, stdev=100.75, samples=19 00:31:01.909 lat (msec) : 4=1.17%, 10=0.93%, 20=12.30%, 50=85.60% 00:31:01.909 cpu : usr=98.56%, sys=1.14%, ctx=36, majf=0, minf=25 00:31:01.909 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:01.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.909 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.909 issued rwts: total=6000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.909 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:01.909 filename0: (groupid=0, jobs=1): err= 0: pid=2499736: Mon Jul 15 16:22:35 2024 00:31:01.909 read: IOPS=481, BW=1925KiB/s (1971kB/s)(18.8MiB/10008msec) 00:31:01.909 slat (nsec): min=5574, max=87430, avg=17427.35, stdev=13013.10 00:31:01.909 clat (usec): min=9936, max=56247, avg=33131.52, stdev=5359.87 00:31:01.909 lat (usec): min=9942, max=56264, avg=33148.94, stdev=5359.64 00:31:01.909 clat percentiles (usec): 00:31:01.909 | 1.00th=[16581], 5.00th=[26346], 10.00th=[30278], 20.00th=[31327], 00:31:01.909 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:31:01.909 | 70.00th=[32900], 80.00th=[33817], 90.00th=[41157], 95.00th=[44303], 00:31:01.909 | 99.00th=[50070], 99.50th=[52167], 99.90th=[55313], 99.95th=[56361], 00:31:01.909 | 99.99th=[56361] 00:31:01.909 bw ( KiB/s): min= 1728, max= 2048, per=4.02%, avg=1922.32, stdev=81.29, samples=19 00:31:01.909 iops : min= 432, max= 512, avg=480.58, stdev=20.32, samples=19 00:31:01.909 lat (msec) : 10=0.06%, 
20=1.68%, 50=97.22%, 100=1.04% 00:31:01.909 cpu : usr=99.03%, sys=0.67%, ctx=9, majf=0, minf=21 00:31:01.909 IO depths : 1=1.7%, 2=3.8%, 4=13.1%, 8=68.1%, 16=13.3%, 32=0.0%, >=64=0.0% 00:31:01.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.909 complete : 0=0.0%, 4=91.9%, 8=4.5%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.909 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.909 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:01.909 filename0: (groupid=0, jobs=1): err= 0: pid=2499737: Mon Jul 15 16:22:35 2024 00:31:01.909 read: IOPS=516, BW=2067KiB/s (2116kB/s)(20.2MiB/10008msec) 00:31:01.909 slat (usec): min=5, max=106, avg=14.93, stdev=12.95 00:31:01.909 clat (usec): min=3976, max=52765, avg=30846.99, stdev=5037.32 00:31:01.909 lat (usec): min=3990, max=52774, avg=30861.92, stdev=5039.06 00:31:01.909 clat percentiles (usec): 00:31:01.909 | 1.00th=[13960], 5.00th=[20841], 10.00th=[23462], 20.00th=[30540], 00:31:01.909 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:31:01.909 | 70.00th=[32113], 80.00th=[32375], 90.00th=[33162], 95.00th=[36439], 00:31:01.909 | 99.00th=[46400], 99.50th=[49546], 99.90th=[52691], 99.95th=[52691], 00:31:01.909 | 99.99th=[52691] 00:31:01.909 bw ( KiB/s): min= 1916, max= 2512, per=4.33%, avg=2069.26, stdev=177.12, samples=19 00:31:01.909 iops : min= 479, max= 628, avg=517.32, stdev=44.28, samples=19 00:31:01.909 lat (msec) : 4=0.04%, 10=0.58%, 20=3.07%, 50=95.90%, 100=0.41% 00:31:01.909 cpu : usr=97.68%, sys=1.47%, ctx=39, majf=0, minf=27 00:31:01.909 IO depths : 1=4.5%, 2=9.0%, 4=19.7%, 8=58.2%, 16=8.6%, 32=0.0%, >=64=0.0% 00:31:01.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.909 complete : 0=0.0%, 4=92.9%, 8=1.9%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.909 issued rwts: total=5171,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.909 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:01.909 filename0: (groupid=0, jobs=1): err= 0: pid=2499738: Mon Jul 15 16:22:35 2024 00:31:01.909 read: IOPS=495, BW=1982KiB/s (2029kB/s)(19.4MiB/10012msec) 00:31:01.909 slat (usec): min=5, max=107, avg=18.18, stdev=13.34 00:31:01.909 clat (usec): min=14325, max=69601, avg=32133.87, stdev=3499.58 00:31:01.909 lat (usec): min=14333, max=69625, avg=32152.04, stdev=3498.65 00:31:01.909 clat percentiles (usec): 00:31:01.909 | 1.00th=[22414], 5.00th=[30016], 10.00th=[30540], 20.00th=[31327], 00:31:01.909 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:01.909 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33424], 95.00th=[33817], 00:31:01.909 | 99.00th=[47973], 99.50th=[50594], 99.90th=[60556], 99.95th=[69731], 00:31:01.909 | 99.99th=[69731] 00:31:01.909 bw ( KiB/s): min= 1792, max= 2056, per=4.14%, avg=1977.84, stdev=79.94, samples=19 00:31:01.909 iops : min= 448, max= 514, avg=494.42, stdev=19.95, samples=19 00:31:01.909 lat (msec) : 20=0.93%, 50=98.51%, 100=0.56% 00:31:01.909 cpu : usr=98.94%, sys=0.75%, ctx=16, majf=0, minf=24 00:31:01.909 IO depths : 1=4.8%, 2=9.7%, 4=22.2%, 8=55.1%, 16=8.2%, 32=0.0%, >=64=0.0% 00:31:01.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.909 complete : 0=0.0%, 4=93.6%, 8=1.2%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.909 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.909 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:01.909 filename0: (groupid=0, jobs=1): err= 0: pid=2499739: Mon Jul 15 
16:22:35 2024 00:31:01.909 read: IOPS=496, BW=1985KiB/s (2032kB/s)(19.4MiB/10024msec) 00:31:01.909 slat (nsec): min=5563, max=85886, avg=11161.23, stdev=9021.48 00:31:01.909 clat (usec): min=13975, max=57928, avg=32153.94, stdev=5891.83 00:31:01.909 lat (usec): min=13984, max=57935, avg=32165.11, stdev=5892.93 00:31:01.909 clat percentiles (usec): 00:31:01.909 | 1.00th=[16450], 5.00th=[21627], 10.00th=[24773], 20.00th=[30540], 00:31:01.909 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:01.909 | 70.00th=[32375], 80.00th=[33162], 90.00th=[39584], 95.00th=[43254], 00:31:01.910 | 99.00th=[50594], 99.50th=[54789], 99.90th=[57934], 99.95th=[57934], 00:31:01.910 | 99.99th=[57934] 00:31:01.910 bw ( KiB/s): min= 1820, max= 2144, per=4.16%, avg=1986.55, stdev=77.60, samples=20 00:31:01.910 iops : min= 455, max= 536, avg=496.60, stdev=19.39, samples=20 00:31:01.910 lat (msec) : 20=3.18%, 50=95.36%, 100=1.47% 00:31:01.910 cpu : usr=98.85%, sys=0.81%, ctx=32, majf=0, minf=46 00:31:01.910 IO depths : 1=1.5%, 2=3.2%, 4=11.2%, 8=71.4%, 16=12.6%, 32=0.0%, >=64=0.0% 00:31:01.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.910 complete : 0=0.0%, 4=91.2%, 8=4.7%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.910 issued rwts: total=4974,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.910 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:01.910 filename0: (groupid=0, jobs=1): err= 0: pid=2499740: Mon Jul 15 16:22:35 2024 00:31:01.910 read: IOPS=501, BW=2007KiB/s (2055kB/s)(19.7MiB/10030msec) 00:31:01.910 slat (usec): min=5, max=103, avg=13.48, stdev=11.16 00:31:01.910 clat (usec): min=12340, max=56485, avg=31786.41, stdev=4962.86 00:31:01.910 lat (usec): min=12347, max=56491, avg=31799.89, stdev=4963.23 00:31:01.910 clat percentiles (usec): 00:31:01.910 | 1.00th=[16909], 5.00th=[22676], 10.00th=[26870], 20.00th=[30802], 00:31:01.910 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:31:01.910 | 70.00th=[32375], 80.00th=[32900], 90.00th=[34341], 95.00th=[39584], 00:31:01.910 | 99.00th=[49021], 99.50th=[51119], 99.90th=[52691], 99.95th=[54789], 00:31:01.910 | 99.99th=[56361] 00:31:01.910 bw ( KiB/s): min= 1800, max= 2192, per=4.19%, avg=2004.11, stdev=90.72, samples=19 00:31:01.910 iops : min= 450, max= 548, avg=500.95, stdev=22.64, samples=19 00:31:01.910 lat (msec) : 20=2.96%, 50=96.30%, 100=0.74% 00:31:01.910 cpu : usr=98.81%, sys=0.89%, ctx=16, majf=0, minf=46 00:31:01.910 IO depths : 1=1.8%, 2=5.0%, 4=16.3%, 8=64.9%, 16=12.0%, 32=0.0%, >=64=0.0% 00:31:01.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.910 complete : 0=0.0%, 4=92.4%, 8=3.1%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.910 issued rwts: total=5032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.910 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:01.910 filename0: (groupid=0, jobs=1): err= 0: pid=2499742: Mon Jul 15 16:22:35 2024 00:31:01.910 read: IOPS=501, BW=2008KiB/s (2056kB/s)(19.6MiB/10010msec) 00:31:01.910 slat (nsec): min=5592, max=83848, avg=15239.70, stdev=11127.25 00:31:01.910 clat (usec): min=15970, max=50768, avg=31753.38, stdev=2080.02 00:31:01.910 lat (usec): min=15976, max=50786, avg=31768.62, stdev=2080.46 00:31:01.910 clat percentiles (usec): 00:31:01.910 | 1.00th=[22676], 5.00th=[30278], 10.00th=[30802], 20.00th=[31327], 00:31:01.910 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:01.910 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32900], 
95.00th=[33424], 00:31:01.910 | 99.00th=[34341], 99.50th=[34341], 99.90th=[50594], 99.95th=[50594], 00:31:01.910 | 99.99th=[50594] 00:31:01.910 bw ( KiB/s): min= 1795, max= 2176, per=4.20%, avg=2006.84, stdev=96.05, samples=19 00:31:01.910 iops : min= 448, max= 544, avg=501.63, stdev=24.09, samples=19 00:31:01.910 lat (msec) : 20=0.64%, 50=99.04%, 100=0.32% 00:31:01.910 cpu : usr=99.05%, sys=0.64%, ctx=28, majf=0, minf=25 00:31:01.910 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:01.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.910 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.910 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.910 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:01.910 filename0: (groupid=0, jobs=1): err= 0: pid=2499743: Mon Jul 15 16:22:35 2024 00:31:01.910 read: IOPS=479, BW=1918KiB/s (1964kB/s)(18.7MiB/10004msec) 00:31:01.910 slat (nsec): min=5566, max=92110, avg=18002.43, stdev=12413.76 00:31:01.910 clat (usec): min=10582, max=52995, avg=33234.38, stdev=5342.24 00:31:01.910 lat (usec): min=10588, max=53023, avg=33252.39, stdev=5341.12 00:31:01.910 clat percentiles (usec): 00:31:01.910 | 1.00th=[19792], 5.00th=[25822], 10.00th=[30016], 20.00th=[31327], 00:31:01.910 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:01.910 | 70.00th=[32637], 80.00th=[34341], 90.00th=[42206], 95.00th=[44303], 00:31:01.910 | 99.00th=[47973], 99.50th=[51643], 99.90th=[52691], 99.95th=[52691], 00:31:01.910 | 99.99th=[53216] 00:31:01.910 bw ( KiB/s): min= 1536, max= 2064, per=4.00%, avg=1911.16, stdev=148.51, samples=19 00:31:01.910 iops : min= 384, max= 516, avg=477.79, stdev=37.13, samples=19 00:31:01.910 lat (msec) : 20=1.06%, 50=98.23%, 100=0.71% 00:31:01.910 cpu : usr=98.93%, sys=0.73%, ctx=67, majf=0, minf=29 00:31:01.910 IO depths : 1=3.0%, 2=6.0%, 4=17.0%, 8=63.2%, 16=10.9%, 32=0.0%, >=64=0.0% 00:31:01.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.910 complete : 0=0.0%, 4=92.5%, 8=3.0%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.910 issued rwts: total=4796,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.910 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:01.910 filename1: (groupid=0, jobs=1): err= 0: pid=2499744: Mon Jul 15 16:22:35 2024 00:31:01.910 read: IOPS=501, BW=2007KiB/s (2055kB/s)(19.6MiB/10013msec) 00:31:01.910 slat (nsec): min=5590, max=94567, avg=16242.39, stdev=12354.94 00:31:01.910 clat (usec): min=17460, max=52763, avg=31751.49, stdev=1635.97 00:31:01.910 lat (usec): min=17468, max=52781, avg=31767.73, stdev=1636.22 00:31:01.910 clat percentiles (usec): 00:31:01.910 | 1.00th=[23462], 5.00th=[30278], 10.00th=[30540], 20.00th=[31065], 00:31:01.910 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:01.910 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32900], 95.00th=[33424], 00:31:01.910 | 99.00th=[34341], 99.50th=[34341], 99.90th=[36963], 99.95th=[36963], 00:31:01.910 | 99.99th=[52691] 00:31:01.910 bw ( KiB/s): min= 1916, max= 2048, per=4.19%, avg=2000.26, stdev=63.12, samples=19 00:31:01.910 iops : min= 479, max= 512, avg=499.95, stdev=15.78, samples=19 00:31:01.910 lat (msec) : 20=0.36%, 50=99.60%, 100=0.04% 00:31:01.910 cpu : usr=99.23%, sys=0.44%, ctx=54, majf=0, minf=23 00:31:01.910 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:01.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.910 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.910 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.910 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:01.910 filename1: (groupid=0, jobs=1): err= 0: pid=2499745: Mon Jul 15 16:22:35 2024 00:31:01.910 read: IOPS=497, BW=1990KiB/s (2038kB/s)(19.5MiB/10024msec) 00:31:01.910 slat (nsec): min=5588, max=91766, avg=23694.55, stdev=15831.12 00:31:01.910 clat (usec): min=16272, max=49314, avg=31929.81, stdev=2586.13 00:31:01.910 lat (usec): min=16279, max=49320, avg=31953.50, stdev=2585.57 00:31:01.910 clat percentiles (usec): 00:31:01.910 | 1.00th=[23462], 5.00th=[30016], 10.00th=[30540], 20.00th=[31327], 00:31:01.910 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:31:01.910 | 70.00th=[32113], 80.00th=[32637], 90.00th=[33162], 95.00th=[33817], 00:31:01.910 | 99.00th=[44303], 99.50th=[45876], 99.90th=[47973], 99.95th=[48497], 00:31:01.910 | 99.99th=[49546] 00:31:01.910 bw ( KiB/s): min= 1920, max= 2048, per=4.15%, avg=1985.42, stdev=59.32, samples=19 00:31:01.910 iops : min= 480, max= 512, avg=496.32, stdev=14.79, samples=19 00:31:01.910 lat (msec) : 20=0.36%, 50=99.64% 00:31:01.910 cpu : usr=96.82%, sys=1.67%, ctx=184, majf=0, minf=25 00:31:01.910 IO depths : 1=4.9%, 2=9.9%, 4=23.5%, 8=53.8%, 16=7.8%, 32=0.0%, >=64=0.0% 00:31:01.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.910 complete : 0=0.0%, 4=93.9%, 8=0.5%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.910 issued rwts: total=4988,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.910 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:01.910 filename1: (groupid=0, jobs=1): err= 0: pid=2499746: Mon Jul 15 16:22:35 2024 00:31:01.910 read: IOPS=500, BW=2002KiB/s (2050kB/s)(19.6MiB/10008msec) 00:31:01.910 slat (nsec): min=5591, max=86002, avg=13646.74, stdev=9960.06 00:31:01.910 clat (usec): min=11988, max=56288, avg=31846.09, stdev=1894.52 00:31:01.910 lat (usec): min=12009, max=56307, avg=31859.74, stdev=1894.77 00:31:01.910 clat percentiles (usec): 00:31:01.910 | 1.00th=[29754], 5.00th=[30278], 10.00th=[30802], 20.00th=[31327], 00:31:01.910 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:01.910 | 70.00th=[32113], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:31:01.910 | 99.00th=[34341], 99.50th=[34341], 99.90th=[47449], 99.95th=[47449], 00:31:01.910 | 99.99th=[56361] 00:31:01.910 bw ( KiB/s): min= 1792, max= 2048, per=4.17%, avg=1993.84, stdev=77.51, samples=19 00:31:01.910 iops : min= 448, max= 512, avg=498.42, stdev=19.35, samples=19 00:31:01.910 lat (msec) : 20=0.64%, 50=99.32%, 100=0.04% 00:31:01.910 cpu : usr=99.12%, sys=0.61%, ctx=12, majf=0, minf=27 00:31:01.910 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:01.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.910 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.910 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.910 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:01.910 filename1: (groupid=0, jobs=1): err= 0: pid=2499747: Mon Jul 15 16:22:35 2024 00:31:01.910 read: IOPS=501, BW=2006KiB/s (2054kB/s)(19.6MiB/10004msec) 00:31:01.910 slat (usec): min=5, max=132, avg=21.19, stdev=15.70 00:31:01.910 clat (usec): min=13372, max=53059, avg=31715.19, stdev=3366.21 00:31:01.910 lat (usec): 
min=13407, max=53078, avg=31736.38, stdev=3366.50 00:31:01.910 clat percentiles (usec): 00:31:01.910 | 1.00th=[19530], 5.00th=[26084], 10.00th=[30016], 20.00th=[31065], 00:31:01.910 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:31:01.910 | 70.00th=[32113], 80.00th=[32637], 90.00th=[33424], 95.00th=[34866], 00:31:01.910 | 99.00th=[42730], 99.50th=[47449], 99.90th=[53216], 99.95th=[53216], 00:31:01.910 | 99.99th=[53216] 00:31:01.910 bw ( KiB/s): min= 1792, max= 2160, per=4.19%, avg=2003.79, stdev=86.62, samples=19 00:31:01.910 iops : min= 448, max= 540, avg=500.95, stdev=21.66, samples=19 00:31:01.910 lat (msec) : 20=1.48%, 50=98.05%, 100=0.48% 00:31:01.910 cpu : usr=97.15%, sys=1.37%, ctx=93, majf=0, minf=30 00:31:01.910 IO depths : 1=4.8%, 2=9.6%, 4=20.0%, 8=57.2%, 16=8.4%, 32=0.0%, >=64=0.0% 00:31:01.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.910 complete : 0=0.0%, 4=92.8%, 8=2.1%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.910 issued rwts: total=5016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.910 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:01.910 filename1: (groupid=0, jobs=1): err= 0: pid=2499748: Mon Jul 15 16:22:35 2024 00:31:01.910 read: IOPS=500, BW=2002KiB/s (2050kB/s)(19.6MiB/10004msec) 00:31:01.910 slat (nsec): min=5461, max=82156, avg=15448.33, stdev=10399.94 00:31:01.910 clat (usec): min=4232, max=59620, avg=31834.25, stdev=2571.21 00:31:01.910 lat (usec): min=4238, max=59640, avg=31849.70, stdev=2571.50 00:31:01.910 clat percentiles (usec): 00:31:01.910 | 1.00th=[28443], 5.00th=[30540], 10.00th=[30802], 20.00th=[31327], 00:31:01.910 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:01.910 | 70.00th=[32113], 80.00th=[32637], 90.00th=[33162], 95.00th=[33424], 00:31:01.911 | 99.00th=[34341], 99.50th=[34341], 99.90th=[59507], 99.95th=[59507], 00:31:01.911 | 99.99th=[59507] 00:31:01.911 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1986.84, stdev=77.89, samples=19 00:31:01.911 iops : min= 448, max= 512, avg=496.63, stdev=19.41, samples=19 00:31:01.911 lat (msec) : 10=0.28%, 20=0.32%, 50=99.08%, 100=0.32% 00:31:01.911 cpu : usr=98.54%, sys=0.91%, ctx=37, majf=0, minf=34 00:31:01.911 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:01.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.911 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.911 issued rwts: total=5006,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.911 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:01.911 filename1: (groupid=0, jobs=1): err= 0: pid=2499750: Mon Jul 15 16:22:35 2024 00:31:01.911 read: IOPS=500, BW=2002KiB/s (2050kB/s)(19.6MiB/10005msec) 00:31:01.911 slat (nsec): min=5570, max=92962, avg=21354.17, stdev=16409.33 00:31:01.911 clat (usec): min=9476, max=52728, avg=31777.27, stdev=2200.74 00:31:01.911 lat (usec): min=9483, max=52745, avg=31798.62, stdev=2200.12 00:31:01.911 clat percentiles (usec): 00:31:01.911 | 1.00th=[28443], 5.00th=[30278], 10.00th=[30802], 20.00th=[31327], 00:31:01.911 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:31:01.911 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32900], 95.00th=[33424], 00:31:01.911 | 99.00th=[34341], 99.50th=[34866], 99.90th=[52691], 99.95th=[52691], 00:31:01.911 | 99.99th=[52691] 00:31:01.911 bw ( KiB/s): min= 1792, max= 2048, per=4.17%, avg=1993.58, stdev=77.32, samples=19 00:31:01.911 iops : min= 
448, max= 512, avg=498.32, stdev=19.28, samples=19 00:31:01.911 lat (msec) : 10=0.32%, 20=0.32%, 50=99.04%, 100=0.32% 00:31:01.911 cpu : usr=98.81%, sys=0.65%, ctx=40, majf=0, minf=30 00:31:01.911 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:01.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.911 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.911 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.911 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:01.911 filename1: (groupid=0, jobs=1): err= 0: pid=2499751: Mon Jul 15 16:22:35 2024 00:31:01.911 read: IOPS=499, BW=1998KiB/s (2046kB/s)(19.5MiB/10014msec) 00:31:01.911 slat (nsec): min=5579, max=83870, avg=20059.32, stdev=14589.92 00:31:01.911 clat (usec): min=18392, max=53103, avg=31848.44, stdev=2338.22 00:31:01.911 lat (usec): min=18398, max=53109, avg=31868.50, stdev=2337.81 00:31:01.911 clat percentiles (usec): 00:31:01.911 | 1.00th=[22414], 5.00th=[30016], 10.00th=[30540], 20.00th=[31327], 00:31:01.911 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:31:01.911 | 70.00th=[32113], 80.00th=[32375], 90.00th=[33162], 95.00th=[33817], 00:31:01.911 | 99.00th=[40109], 99.50th=[44303], 99.90th=[51119], 99.95th=[51643], 00:31:01.911 | 99.99th=[53216] 00:31:01.911 bw ( KiB/s): min= 1888, max= 2048, per=4.18%, avg=1997.58, stdev=62.50, samples=19 00:31:01.911 iops : min= 472, max= 512, avg=499.32, stdev=15.58, samples=19 00:31:01.911 lat (msec) : 20=0.18%, 50=99.62%, 100=0.20% 00:31:01.911 cpu : usr=98.81%, sys=0.81%, ctx=87, majf=0, minf=31 00:31:01.911 IO depths : 1=4.6%, 2=10.0%, 4=23.3%, 8=53.9%, 16=8.2%, 32=0.0%, >=64=0.0% 00:31:01.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.911 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.911 issued rwts: total=5002,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.911 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:01.911 filename1: (groupid=0, jobs=1): err= 0: pid=2499752: Mon Jul 15 16:22:35 2024 00:31:01.911 read: IOPS=487, BW=1952KiB/s (1999kB/s)(19.1MiB/10001msec) 00:31:01.911 slat (nsec): min=5442, max=90714, avg=16723.52, stdev=13199.15 00:31:01.911 clat (usec): min=12348, max=61614, avg=32682.17, stdev=5372.88 00:31:01.911 lat (usec): min=12354, max=61623, avg=32698.90, stdev=5372.94 00:31:01.911 clat percentiles (usec): 00:31:01.911 | 1.00th=[17695], 5.00th=[24773], 10.00th=[29754], 20.00th=[31065], 00:31:01.911 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:31:01.911 | 70.00th=[32637], 80.00th=[33162], 90.00th=[38536], 95.00th=[43779], 00:31:01.911 | 99.00th=[51643], 99.50th=[53740], 99.90th=[58459], 99.95th=[58459], 00:31:01.911 | 99.99th=[61604] 00:31:01.911 bw ( KiB/s): min= 1792, max= 2075, per=4.07%, avg=1946.63, stdev=74.37, samples=19 00:31:01.911 iops : min= 448, max= 518, avg=486.58, stdev=18.58, samples=19 00:31:01.911 lat (msec) : 20=1.82%, 50=96.84%, 100=1.33% 00:31:01.911 cpu : usr=98.02%, sys=1.08%, ctx=30, majf=0, minf=26 00:31:01.911 IO depths : 1=1.2%, 2=3.7%, 4=13.2%, 8=68.4%, 16=13.5%, 32=0.0%, >=64=0.0% 00:31:01.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.911 complete : 0=0.0%, 4=92.0%, 8=3.9%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.911 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.911 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:31:01.911 filename2: (groupid=0, jobs=1): err= 0: pid=2499753: Mon Jul 15 16:22:35 2024 00:31:01.911 read: IOPS=486, BW=1946KiB/s (1993kB/s)(19.0MiB/10009msec) 00:31:01.911 slat (nsec): min=5569, max=78648, avg=14532.63, stdev=12408.91 00:31:01.911 clat (usec): min=14031, max=55411, avg=32789.52, stdev=4696.71 00:31:01.911 lat (usec): min=14044, max=55417, avg=32804.06, stdev=4697.09 00:31:01.911 clat percentiles (usec): 00:31:01.911 | 1.00th=[20579], 5.00th=[25822], 10.00th=[30540], 20.00th=[31327], 00:31:01.911 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:31:01.911 | 70.00th=[32637], 80.00th=[33162], 90.00th=[39584], 95.00th=[42730], 00:31:01.911 | 99.00th=[49021], 99.50th=[51119], 99.90th=[54264], 99.95th=[55313], 00:31:01.911 | 99.99th=[55313] 00:31:01.911 bw ( KiB/s): min= 1788, max= 2048, per=4.06%, avg=1940.21, stdev=86.66, samples=19 00:31:01.911 iops : min= 447, max= 512, avg=485.05, stdev=21.67, samples=19 00:31:01.911 lat (msec) : 20=0.86%, 50=98.27%, 100=0.86% 00:31:01.911 cpu : usr=99.03%, sys=0.66%, ctx=10, majf=0, minf=45 00:31:01.911 IO depths : 1=1.5%, 2=4.8%, 4=17.7%, 8=63.8%, 16=12.2%, 32=0.0%, >=64=0.0% 00:31:01.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.911 complete : 0=0.0%, 4=92.7%, 8=2.7%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.911 issued rwts: total=4869,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.911 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:01.911 filename2: (groupid=0, jobs=1): err= 0: pid=2499754: Mon Jul 15 16:22:35 2024 00:31:01.911 read: IOPS=429, BW=1717KiB/s (1758kB/s)(16.8MiB/10004msec) 00:31:01.911 slat (nsec): min=5574, max=91169, avg=20424.55, stdev=15102.13 00:31:01.911 clat (usec): min=5734, max=82880, avg=37106.79, stdev=6473.31 00:31:01.911 lat (usec): min=5740, max=82903, avg=37127.21, stdev=6469.49 00:31:01.911 clat percentiles (usec): 00:31:01.911 | 1.00th=[16319], 5.00th=[31065], 10.00th=[31589], 20.00th=[31589], 00:31:01.911 | 30.00th=[31851], 40.00th=[32113], 50.00th=[36439], 60.00th=[40633], 00:31:01.911 | 70.00th=[42206], 80.00th=[43254], 90.00th=[44303], 95.00th=[45351], 00:31:01.911 | 99.00th=[49546], 99.50th=[51643], 99.90th=[59507], 99.95th=[82314], 00:31:01.911 | 99.99th=[83362] 00:31:01.911 bw ( KiB/s): min= 1440, max= 2048, per=3.60%, avg=1718.32, stdev=233.44, samples=19 00:31:01.911 iops : min= 360, max= 512, avg=429.58, stdev=58.36, samples=19 00:31:01.911 lat (msec) : 10=0.09%, 20=0.95%, 50=98.14%, 100=0.82% 00:31:01.911 cpu : usr=98.77%, sys=0.81%, ctx=158, majf=0, minf=29 00:31:01.911 IO depths : 1=2.4%, 2=5.0%, 4=20.1%, 8=62.1%, 16=10.3%, 32=0.0%, >=64=0.0% 00:31:01.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.911 complete : 0=0.0%, 4=93.8%, 8=0.8%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.911 issued rwts: total=4294,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.911 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:01.911 filename2: (groupid=0, jobs=1): err= 0: pid=2499755: Mon Jul 15 16:22:35 2024 00:31:01.911 read: IOPS=501, BW=2008KiB/s (2056kB/s)(19.6MiB/10008msec) 00:31:01.911 slat (nsec): min=5579, max=86758, avg=16957.74, stdev=12659.48 00:31:01.911 clat (usec): min=18392, max=48411, avg=31725.15, stdev=1955.47 00:31:01.911 lat (usec): min=18398, max=48430, avg=31742.11, stdev=1955.86 00:31:01.911 clat percentiles (usec): 00:31:01.911 | 1.00th=[21103], 5.00th=[30278], 10.00th=[30802], 20.00th=[31327], 00:31:01.911 | 30.00th=[31589], 40.00th=[31851], 
50.00th=[31851], 60.00th=[31851], 00:31:01.911 | 70.00th=[32113], 80.00th=[32375], 90.00th=[33162], 95.00th=[33424], 00:31:01.911 | 99.00th=[34341], 99.50th=[34866], 99.90th=[48497], 99.95th=[48497], 00:31:01.911 | 99.99th=[48497] 00:31:01.911 bw ( KiB/s): min= 1920, max= 2176, per=4.19%, avg=2000.47, stdev=75.95, samples=19 00:31:01.911 iops : min= 480, max= 544, avg=500.00, stdev=18.99, samples=19 00:31:01.911 lat (msec) : 20=0.32%, 50=99.68% 00:31:01.911 cpu : usr=99.19%, sys=0.52%, ctx=12, majf=0, minf=27 00:31:01.911 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:01.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.911 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.911 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.911 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:01.911 filename2: (groupid=0, jobs=1): err= 0: pid=2499756: Mon Jul 15 16:22:35 2024 00:31:01.911 read: IOPS=494, BW=1977KiB/s (2024kB/s)(19.3MiB/10015msec) 00:31:01.911 slat (nsec): min=5126, max=95777, avg=18802.27, stdev=14650.90 00:31:01.911 clat (usec): min=15035, max=55475, avg=32201.07, stdev=4051.80 00:31:01.911 lat (usec): min=15044, max=55490, avg=32219.87, stdev=4051.10 00:31:01.911 clat percentiles (usec): 00:31:01.911 | 1.00th=[21103], 5.00th=[27395], 10.00th=[30278], 20.00th=[31065], 00:31:01.911 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:01.911 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33817], 95.00th=[40109], 00:31:01.911 | 99.00th=[50070], 99.50th=[53216], 99.90th=[55313], 99.95th=[55313], 00:31:01.911 | 99.99th=[55313] 00:31:01.911 bw ( KiB/s): min= 1792, max= 2096, per=4.15%, avg=1981.16, stdev=78.44, samples=19 00:31:01.911 iops : min= 448, max= 524, avg=495.21, stdev=19.61, samples=19 00:31:01.911 lat (msec) : 20=0.67%, 50=98.32%, 100=1.01% 00:31:01.911 cpu : usr=98.87%, sys=0.77%, ctx=70, majf=0, minf=30 00:31:01.911 IO depths : 1=3.8%, 2=7.9%, 4=19.6%, 8=59.0%, 16=9.6%, 32=0.0%, >=64=0.0% 00:31:01.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.911 complete : 0=0.0%, 4=93.1%, 8=1.9%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.911 issued rwts: total=4950,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.911 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:01.911 filename2: (groupid=0, jobs=1): err= 0: pid=2499757: Mon Jul 15 16:22:35 2024 00:31:01.911 read: IOPS=508, BW=2033KiB/s (2082kB/s)(19.9MiB/10010msec) 00:31:01.911 slat (nsec): min=5583, max=86910, avg=12975.46, stdev=10449.11 00:31:01.911 clat (usec): min=13213, max=57428, avg=31370.72, stdev=3012.43 00:31:01.911 lat (usec): min=13219, max=57450, avg=31383.69, stdev=3013.30 00:31:01.911 clat percentiles (usec): 00:31:01.911 | 1.00th=[18220], 5.00th=[23987], 10.00th=[30540], 20.00th=[31065], 00:31:01.911 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:01.911 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32900], 95.00th=[33424], 00:31:01.911 | 99.00th=[34341], 99.50th=[34341], 99.90th=[50594], 99.95th=[51119], 00:31:01.911 | 99.99th=[57410] 00:31:01.911 bw ( KiB/s): min= 1792, max= 2304, per=4.25%, avg=2033.58, stdev=111.88, samples=19 00:31:01.912 iops : min= 448, max= 576, avg=508.32, stdev=27.92, samples=19 00:31:01.912 lat (msec) : 20=2.20%, 50=97.48%, 100=0.31% 00:31:01.912 cpu : usr=96.95%, sys=1.55%, ctx=96, majf=0, minf=33 00:31:01.912 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 
16=6.3%, 32=0.0%, >=64=0.0% 00:31:01.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.912 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.912 issued rwts: total=5088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.912 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:01.912 filename2: (groupid=0, jobs=1): err= 0: pid=2499759: Mon Jul 15 16:22:35 2024 00:31:01.912 read: IOPS=502, BW=2011KiB/s (2059kB/s)(19.7MiB/10035msec) 00:31:01.912 slat (nsec): min=5569, max=93624, avg=13715.16, stdev=11269.48 00:31:01.912 clat (usec): min=3423, max=62056, avg=31709.13, stdev=6680.33 00:31:01.912 lat (usec): min=3441, max=62066, avg=31722.85, stdev=6681.04 00:31:01.912 clat percentiles (usec): 00:31:01.912 | 1.00th=[10290], 5.00th=[20841], 10.00th=[24511], 20.00th=[30278], 00:31:01.912 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:31:01.912 | 70.00th=[32375], 80.00th=[33162], 90.00th=[38011], 95.00th=[42730], 00:31:01.912 | 99.00th=[54264], 99.50th=[57410], 99.90th=[58459], 99.95th=[58459], 00:31:01.912 | 99.99th=[62129] 00:31:01.912 bw ( KiB/s): min= 1728, max= 2384, per=4.21%, avg=2013.40, stdev=135.24, samples=20 00:31:01.912 iops : min= 432, max= 596, avg=503.35, stdev=33.81, samples=20 00:31:01.912 lat (msec) : 4=0.14%, 10=0.77%, 20=3.29%, 50=94.21%, 100=1.59% 00:31:01.912 cpu : usr=99.01%, sys=0.58%, ctx=71, majf=0, minf=23 00:31:01.912 IO depths : 1=1.6%, 2=4.2%, 4=14.2%, 8=67.7%, 16=12.3%, 32=0.0%, >=64=0.0% 00:31:01.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.912 complete : 0=0.0%, 4=91.8%, 8=3.7%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.912 issued rwts: total=5044,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.912 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:01.912 filename2: (groupid=0, jobs=1): err= 0: pid=2499760: Mon Jul 15 16:22:35 2024 00:31:01.912 read: IOPS=502, BW=2008KiB/s (2056kB/s)(19.6MiB/10007msec) 00:31:01.912 slat (nsec): min=5579, max=86825, avg=14060.14, stdev=10157.26 00:31:01.912 clat (usec): min=18686, max=54243, avg=31752.96, stdev=1979.55 00:31:01.912 lat (usec): min=18693, max=54261, avg=31767.02, stdev=1979.84 00:31:01.912 clat percentiles (usec): 00:31:01.912 | 1.00th=[21103], 5.00th=[30278], 10.00th=[30802], 20.00th=[31327], 00:31:01.912 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:01.912 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32900], 95.00th=[33424], 00:31:01.912 | 99.00th=[34341], 99.50th=[34341], 99.90th=[47973], 99.95th=[47973], 00:31:01.912 | 99.99th=[54264] 00:31:01.912 bw ( KiB/s): min= 1916, max= 2176, per=4.20%, avg=2007.26, stdev=85.83, samples=19 00:31:01.912 iops : min= 479, max= 544, avg=501.74, stdev=21.48, samples=19 00:31:01.912 lat (msec) : 20=0.96%, 50=99.00%, 100=0.04% 00:31:01.912 cpu : usr=98.36%, sys=0.95%, ctx=67, majf=0, minf=29 00:31:01.912 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:01.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.912 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.912 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.912 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:01.912 filename2: (groupid=0, jobs=1): err= 0: pid=2499761: Mon Jul 15 16:22:35 2024 00:31:01.912 read: IOPS=487, BW=1948KiB/s (1995kB/s)(19.0MiB/10006msec) 00:31:01.912 slat (nsec): min=5484, max=87957, 
avg=16018.49, stdev=11856.02 00:31:01.912 clat (usec): min=9656, max=56109, avg=32752.07, stdev=4650.23 00:31:01.912 lat (usec): min=9663, max=56131, avg=32768.09, stdev=4649.81 00:31:01.912 clat percentiles (usec): 00:31:01.912 | 1.00th=[20317], 5.00th=[25560], 10.00th=[30278], 20.00th=[31327], 00:31:01.912 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:01.912 | 70.00th=[32637], 80.00th=[33424], 90.00th=[39584], 95.00th=[42206], 00:31:01.912 | 99.00th=[49021], 99.50th=[49546], 99.90th=[54789], 99.95th=[55837], 00:31:01.912 | 99.99th=[56361] 00:31:01.912 bw ( KiB/s): min= 1792, max= 2048, per=4.07%, avg=1944.00, stdev=70.50, samples=19 00:31:01.912 iops : min= 448, max= 512, avg=486.00, stdev=17.63, samples=19 00:31:01.912 lat (msec) : 10=0.02%, 20=0.94%, 50=98.60%, 100=0.43% 00:31:01.912 cpu : usr=98.52%, sys=0.97%, ctx=69, majf=0, minf=28 00:31:01.912 IO depths : 1=0.6%, 2=3.2%, 4=15.1%, 8=67.3%, 16=13.7%, 32=0.0%, >=64=0.0% 00:31:01.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.912 complete : 0=0.0%, 4=92.2%, 8=4.0%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.912 issued rwts: total=4873,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.912 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:01.912 00:31:01.912 Run status group 0 (all jobs): 00:31:01.912 READ: bw=46.7MiB/s (48.9MB/s), 1717KiB/s-2397KiB/s (1758kB/s-2455kB/s), io=468MiB (491MB), run=10001-10035msec 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.912 bdev_null0 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
bdev_null0 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.912 [2024-07-15 16:22:36.181740] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.912 bdev_null1 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.912 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:01.913 { 00:31:01.913 "params": { 00:31:01.913 "name": "Nvme$subsystem", 00:31:01.913 "trtype": "$TEST_TRANSPORT", 00:31:01.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:01.913 "adrfam": "ipv4", 00:31:01.913 "trsvcid": "$NVMF_PORT", 00:31:01.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:01.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:01.913 "hdgst": ${hdgst:-false}, 00:31:01.913 "ddgst": ${ddgst:-false} 00:31:01.913 }, 00:31:01.913 "method": "bdev_nvme_attach_controller" 00:31:01.913 } 00:31:01.913 EOF 00:31:01.913 )") 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:01.913 { 00:31:01.913 "params": { 00:31:01.913 "name": "Nvme$subsystem", 00:31:01.913 "trtype": "$TEST_TRANSPORT", 00:31:01.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:01.913 "adrfam": "ipv4", 00:31:01.913 "trsvcid": "$NVMF_PORT", 00:31:01.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:01.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:01.913 
"hdgst": ${hdgst:-false}, 00:31:01.913 "ddgst": ${ddgst:-false} 00:31:01.913 }, 00:31:01.913 "method": "bdev_nvme_attach_controller" 00:31:01.913 } 00:31:01.913 EOF 00:31:01.913 )") 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:01.913 "params": { 00:31:01.913 "name": "Nvme0", 00:31:01.913 "trtype": "tcp", 00:31:01.913 "traddr": "10.0.0.2", 00:31:01.913 "adrfam": "ipv4", 00:31:01.913 "trsvcid": "4420", 00:31:01.913 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:01.913 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:01.913 "hdgst": false, 00:31:01.913 "ddgst": false 00:31:01.913 }, 00:31:01.913 "method": "bdev_nvme_attach_controller" 00:31:01.913 },{ 00:31:01.913 "params": { 00:31:01.913 "name": "Nvme1", 00:31:01.913 "trtype": "tcp", 00:31:01.913 "traddr": "10.0.0.2", 00:31:01.913 "adrfam": "ipv4", 00:31:01.913 "trsvcid": "4420", 00:31:01.913 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:01.913 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:01.913 "hdgst": false, 00:31:01.913 "ddgst": false 00:31:01.913 }, 00:31:01.913 "method": "bdev_nvme_attach_controller" 00:31:01.913 }' 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:01.913 16:22:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:01.913 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:01.913 ... 00:31:01.913 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:01.913 ... 
00:31:01.913 fio-3.35 00:31:01.913 Starting 4 threads 00:31:01.913 EAL: No free 2048 kB hugepages reported on node 1 00:31:07.196 00:31:07.196 filename0: (groupid=0, jobs=1): err= 0: pid=2502065: Mon Jul 15 16:22:42 2024 00:31:07.196 read: IOPS=2070, BW=16.2MiB/s (17.0MB/s)(80.9MiB/5002msec) 00:31:07.196 slat (nsec): min=5407, max=66866, avg=7771.87, stdev=2870.35 00:31:07.196 clat (usec): min=1887, max=45884, avg=3841.77, stdev=1331.40 00:31:07.196 lat (usec): min=1896, max=45919, avg=3849.54, stdev=1331.56 00:31:07.196 clat percentiles (usec): 00:31:07.196 | 1.00th=[ 2507], 5.00th=[ 2868], 10.00th=[ 3064], 20.00th=[ 3261], 00:31:07.196 | 30.00th=[ 3458], 40.00th=[ 3621], 50.00th=[ 3752], 60.00th=[ 3851], 00:31:07.196 | 70.00th=[ 4113], 80.00th=[ 4359], 90.00th=[ 4686], 95.00th=[ 4948], 00:31:07.196 | 99.00th=[ 5538], 99.50th=[ 5800], 99.90th=[ 7046], 99.95th=[45876], 00:31:07.196 | 99.99th=[45876] 00:31:07.196 bw ( KiB/s): min=14976, max=16912, per=24.87%, avg=16549.33, stdev=602.61, samples=9 00:31:07.196 iops : min= 1872, max= 2114, avg=2068.67, stdev=75.33, samples=9 00:31:07.196 lat (msec) : 2=0.06%, 4=65.21%, 10=34.66%, 50=0.08% 00:31:07.196 cpu : usr=96.68%, sys=3.06%, ctx=7, majf=0, minf=98 00:31:07.196 IO depths : 1=0.5%, 2=1.6%, 4=70.4%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:07.196 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.196 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.196 issued rwts: total=10356,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:07.196 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:07.196 filename0: (groupid=0, jobs=1): err= 0: pid=2502066: Mon Jul 15 16:22:42 2024 00:31:07.196 read: IOPS=2068, BW=16.2MiB/s (16.9MB/s)(80.8MiB/5002msec) 00:31:07.196 slat (nsec): min=5403, max=47246, avg=6526.20, stdev=2406.42 00:31:07.196 clat (usec): min=1560, max=6958, avg=3848.85, stdev=694.66 00:31:07.196 lat (usec): min=1565, max=6965, avg=3855.38, stdev=694.63 00:31:07.196 clat percentiles (usec): 00:31:07.196 | 1.00th=[ 2343], 5.00th=[ 2835], 10.00th=[ 3032], 20.00th=[ 3294], 00:31:07.196 | 30.00th=[ 3458], 40.00th=[ 3654], 50.00th=[ 3785], 60.00th=[ 3916], 00:31:07.196 | 70.00th=[ 4146], 80.00th=[ 4424], 90.00th=[ 4817], 95.00th=[ 5080], 00:31:07.196 | 99.00th=[ 5800], 99.50th=[ 5866], 99.90th=[ 6194], 99.95th=[ 6587], 00:31:07.196 | 99.99th=[ 6980] 00:31:07.196 bw ( KiB/s): min=16384, max=16704, per=24.87%, avg=16552.89, stdev=89.48, samples=9 00:31:07.196 iops : min= 2048, max= 2088, avg=2069.11, stdev=11.19, samples=9 00:31:07.196 lat (msec) : 2=0.21%, 4=62.87%, 10=36.92% 00:31:07.196 cpu : usr=97.20%, sys=2.54%, ctx=20, majf=0, minf=108 00:31:07.196 IO depths : 1=0.2%, 2=1.4%, 4=69.9%, 8=28.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:07.196 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.196 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.196 issued rwts: total=10348,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:07.196 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:07.196 filename1: (groupid=0, jobs=1): err= 0: pid=2502067: Mon Jul 15 16:22:42 2024 00:31:07.196 read: IOPS=2115, BW=16.5MiB/s (17.3MB/s)(82.7MiB/5002msec) 00:31:07.196 slat (usec): min=5, max=100, avg= 8.60, stdev= 3.41 00:31:07.196 clat (usec): min=1870, max=6872, avg=3757.95, stdev=638.77 00:31:07.196 lat (usec): min=1879, max=6902, avg=3766.55, stdev=638.76 00:31:07.196 clat percentiles (usec): 00:31:07.196 | 1.00th=[ 2474], 5.00th=[ 
2802], 10.00th=[ 2999], 20.00th=[ 3228], 00:31:07.196 | 30.00th=[ 3425], 40.00th=[ 3556], 50.00th=[ 3720], 60.00th=[ 3785], 00:31:07.196 | 70.00th=[ 4015], 80.00th=[ 4228], 90.00th=[ 4621], 95.00th=[ 4948], 00:31:07.196 | 99.00th=[ 5473], 99.50th=[ 5669], 99.90th=[ 6194], 99.95th=[ 6587], 00:31:07.196 | 99.99th=[ 6849] 00:31:07.196 bw ( KiB/s): min=16688, max=17488, per=25.50%, avg=16967.33, stdev=310.26, samples=9 00:31:07.196 iops : min= 2086, max= 2186, avg=2120.89, stdev=38.73, samples=9 00:31:07.196 lat (msec) : 2=0.08%, 4=68.92%, 10=31.00% 00:31:07.196 cpu : usr=97.28%, sys=2.46%, ctx=14, majf=0, minf=78 00:31:07.196 IO depths : 1=0.2%, 2=1.3%, 4=69.7%, 8=28.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:07.196 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.196 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.196 issued rwts: total=10583,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:07.196 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:07.196 filename1: (groupid=0, jobs=1): err= 0: pid=2502068: Mon Jul 15 16:22:42 2024 00:31:07.196 read: IOPS=2063, BW=16.1MiB/s (16.9MB/s)(80.6MiB/5002msec) 00:31:07.196 slat (nsec): min=5408, max=62859, avg=8632.79, stdev=3177.58 00:31:07.196 clat (usec): min=1524, max=6840, avg=3852.42, stdev=674.59 00:31:07.196 lat (usec): min=1547, max=6848, avg=3861.05, stdev=674.56 00:31:07.196 clat percentiles (usec): 00:31:07.196 | 1.00th=[ 2474], 5.00th=[ 2835], 10.00th=[ 3064], 20.00th=[ 3294], 00:31:07.196 | 30.00th=[ 3490], 40.00th=[ 3654], 50.00th=[ 3752], 60.00th=[ 3884], 00:31:07.196 | 70.00th=[ 4146], 80.00th=[ 4424], 90.00th=[ 4752], 95.00th=[ 5080], 00:31:07.196 | 99.00th=[ 5735], 99.50th=[ 5932], 99.90th=[ 6259], 99.95th=[ 6325], 00:31:07.196 | 99.99th=[ 6456] 00:31:07.196 bw ( KiB/s): min=16272, max=16848, per=24.78%, avg=16494.22, stdev=173.70, samples=9 00:31:07.196 iops : min= 2034, max= 2106, avg=2061.78, stdev=21.71, samples=9 00:31:07.196 lat (msec) : 2=0.11%, 4=63.37%, 10=36.52% 00:31:07.196 cpu : usr=97.22%, sys=2.48%, ctx=9, majf=0, minf=89 00:31:07.197 IO depths : 1=0.3%, 2=1.9%, 4=68.9%, 8=28.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:07.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.197 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.197 issued rwts: total=10323,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:07.197 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:07.197 00:31:07.197 Run status group 0 (all jobs): 00:31:07.197 READ: bw=65.0MiB/s (68.1MB/s), 16.1MiB/s-16.5MiB/s (16.9MB/s-17.3MB/s), io=325MiB (341MB), run=5002-5002msec 00:31:07.197 16:22:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:07.197 16:22:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:07.197 16:22:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:07.197 16:22:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:07.197 16:22:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:07.197 16:22:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:07.197 16:22:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.197 16:22:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:07.197 16:22:42 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.197 16:22:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:07.197 16:22:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.197 16:22:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:07.197 16:22:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.197 16:22:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:07.197 16:22:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:07.197 16:22:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:07.197 16:22:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:07.197 16:22:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.197 16:22:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:07.197 16:22:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.197 16:22:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:07.197 16:22:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.197 16:22:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:07.197 16:22:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.197 00:31:07.197 real 0m24.087s 00:31:07.197 user 5m17.631s 00:31:07.197 sys 0m4.181s 00:31:07.197 16:22:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:07.197 16:22:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:07.197 ************************************ 00:31:07.197 END TEST fio_dif_rand_params 00:31:07.197 ************************************ 00:31:07.197 16:22:42 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:07.197 16:22:42 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:07.197 16:22:42 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:07.197 16:22:42 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:07.197 16:22:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:07.197 ************************************ 00:31:07.197 START TEST fio_dif_digest 00:31:07.197 ************************************ 00:31:07.197 16:22:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:31:07.197 16:22:42 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:31:07.197 16:22:42 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:07.197 16:22:42 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:31:07.197 16:22:42 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:31:07.197 16:22:42 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:07.197 16:22:42 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:31:07.197 16:22:42 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:31:07.197 16:22:42 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:31:07.197 16:22:42 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:31:07.197 16:22:42 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # 
ddgst=true 00:31:07.197 16:22:42 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:31:07.197 16:22:42 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:31:07.197 16:22:42 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:31:07.197 16:22:42 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:31:07.197 16:22:42 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:31:07.197 16:22:42 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:07.197 16:22:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.197 16:22:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:07.197 bdev_null0 00:31:07.197 16:22:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.197 16:22:42 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:07.197 16:22:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.197 16:22:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:07.197 16:22:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.197 16:22:42 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:07.197 16:22:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.197 16:22:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:07.197 16:22:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.197 16:22:42 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:07.197 16:22:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.197 16:22:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:07.197 [2024-07-15 16:22:42.584988] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:07.197 16:22:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.197 16:22:42 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:07.197 16:22:42 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:07.197 16:22:42 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:07.197 16:22:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:31:07.197 16:22:42 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:07.197 16:22:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:31:07.197 16:22:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:07.197 16:22:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:07.197 16:22:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:07.197 { 00:31:07.197 "params": { 00:31:07.197 "name": "Nvme$subsystem", 00:31:07.197 "trtype": "$TEST_TRANSPORT", 00:31:07.197 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:31:07.197 "adrfam": "ipv4", 00:31:07.197 "trsvcid": "$NVMF_PORT", 00:31:07.197 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:07.197 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:07.197 "hdgst": ${hdgst:-false}, 00:31:07.197 "ddgst": ${ddgst:-false} 00:31:07.198 }, 00:31:07.198 "method": "bdev_nvme_attach_controller" 00:31:07.198 } 00:31:07.198 EOF 00:31:07.198 )") 00:31:07.198 16:22:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:07.198 16:22:42 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:31:07.198 16:22:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:07.198 16:22:42 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:31:07.198 16:22:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:07.198 16:22:42 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:31:07.198 16:22:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:07.198 16:22:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:31:07.198 16:22:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:07.198 16:22:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:07.198 16:22:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:31:07.198 16:22:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:07.198 16:22:42 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:31:07.198 16:22:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:31:07.198 16:22:42 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:31:07.198 16:22:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:07.198 16:22:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:31:07.198 16:22:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:31:07.198 16:22:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:07.198 "params": { 00:31:07.198 "name": "Nvme0", 00:31:07.198 "trtype": "tcp", 00:31:07.198 "traddr": "10.0.0.2", 00:31:07.198 "adrfam": "ipv4", 00:31:07.198 "trsvcid": "4420", 00:31:07.198 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:07.198 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:07.198 "hdgst": true, 00:31:07.198 "ddgst": true 00:31:07.198 }, 00:31:07.198 "method": "bdev_nvme_attach_controller" 00:31:07.198 }' 00:31:07.198 16:22:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:07.198 16:22:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:07.198 16:22:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:07.198 16:22:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:07.198 16:22:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:07.198 16:22:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:07.198 16:22:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:07.198 16:22:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:07.198 16:22:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:07.198 16:22:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:07.198 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:07.198 ... 
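
Compared with the fio_dif_rand_params runs, the fio_dif_digest setup traced above changes both ends of the connection: create_subsystems 0 builds the null bdev with protection information (--md-size 16 --dif-type 3 instead of --dif-type 1), and the resolved JSON printed just above attaches Nvme0 with "hdgst": true and "ddgst": true, so every NVMe/TCP PDU carries header and data digests. The rpc_cmd calls are thin wrappers around SPDK's scripts/rpc.py; run standalone against an already-started nvmf_tgt with a TCP transport created, the equivalent target-side setup would look roughly like the sketch below (paths relative to an SPDK checkout):

# Sketch: target-side equivalent of create_subsystem 0 with NULL_DIF=3,
# assuming nvmf_tgt is running and a TCP transport already exists.
RPC=./scripts/rpc.py

# 64 MB null bdev, 512-byte blocks, 16 bytes of metadata, DIF type 3
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

# Export it over NVMe/TCP at 10.0.0.2:4420, as in the trace
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
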
00:31:07.198 fio-3.35 00:31:07.198 Starting 3 threads 00:31:07.460 EAL: No free 2048 kB hugepages reported on node 1 00:31:19.754 00:31:19.754 filename0: (groupid=0, jobs=1): err= 0: pid=2503537: Mon Jul 15 16:22:53 2024 00:31:19.754 read: IOPS=126, BW=15.8MiB/s (16.6MB/s)(159MiB/10032msec) 00:31:19.754 slat (nsec): min=5869, max=36025, avg=7915.42, stdev=1763.76 00:31:19.754 clat (msec): min=7, max=133, avg=23.70, stdev=20.90 00:31:19.754 lat (msec): min=7, max=133, avg=23.71, stdev=20.90 00:31:19.754 clat percentiles (msec): 00:31:19.754 | 1.00th=[ 9], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 11], 00:31:19.754 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 15], 00:31:19.754 | 70.00th=[ 16], 80.00th=[ 52], 90.00th=[ 54], 95.00th=[ 55], 00:31:19.754 | 99.00th=[ 94], 99.50th=[ 95], 99.90th=[ 96], 99.95th=[ 134], 00:31:19.754 | 99.99th=[ 134] 00:31:19.754 bw ( KiB/s): min=10752, max=23040, per=34.93%, avg=16204.80, stdev=3287.96, samples=20 00:31:19.754 iops : min= 84, max= 180, avg=126.60, stdev=25.69, samples=20 00:31:19.754 lat (msec) : 10=13.95%, 20=60.52%, 50=0.32%, 100=25.14%, 250=0.08% 00:31:19.754 cpu : usr=96.71%, sys=3.06%, ctx=15, majf=0, minf=115 00:31:19.754 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:19.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:19.754 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:19.754 issued rwts: total=1269,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:19.754 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:19.754 filename0: (groupid=0, jobs=1): err= 0: pid=2503538: Mon Jul 15 16:22:53 2024 00:31:19.754 read: IOPS=120, BW=15.0MiB/s (15.7MB/s)(150MiB/10005msec) 00:31:19.754 slat (nsec): min=5698, max=36605, avg=6472.53, stdev=1283.12 00:31:19.754 clat (usec): min=6576, max=95614, avg=24978.17, stdev=20107.42 00:31:19.754 lat (usec): min=6582, max=95620, avg=24984.64, stdev=20107.40 00:31:19.754 clat percentiles (usec): 00:31:19.754 | 1.00th=[ 8094], 5.00th=[ 9634], 10.00th=[10552], 20.00th=[11731], 00:31:19.754 | 30.00th=[12256], 40.00th=[13042], 50.00th=[13960], 60.00th=[14877], 00:31:19.754 | 70.00th=[16581], 80.00th=[52167], 90.00th=[53740], 95.00th=[54264], 00:31:19.754 | 99.00th=[92799], 99.50th=[93848], 99.90th=[93848], 99.95th=[95945], 00:31:19.754 | 99.99th=[95945] 00:31:19.754 bw ( KiB/s): min= 9216, max=20736, per=32.91%, avg=15265.68, stdev=3069.88, samples=19 00:31:19.754 iops : min= 72, max= 162, avg=119.26, stdev=23.98, samples=19 00:31:19.754 lat (msec) : 10=6.66%, 20=64.70%, 50=0.92%, 100=27.73% 00:31:19.754 cpu : usr=96.85%, sys=2.94%, ctx=22, majf=0, minf=179 00:31:19.754 IO depths : 1=6.4%, 2=93.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:19.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:19.754 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:19.754 issued rwts: total=1201,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:19.754 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:19.754 filename0: (groupid=0, jobs=1): err= 0: pid=2503539: Mon Jul 15 16:22:53 2024 00:31:19.754 read: IOPS=116, BW=14.6MiB/s (15.3MB/s)(146MiB/10041msec) 00:31:19.754 slat (nsec): min=5675, max=38587, avg=6513.00, stdev=1456.74 00:31:19.754 clat (usec): min=7126, max=97503, avg=25755.61, stdev=21476.88 00:31:19.754 lat (usec): min=7132, max=97509, avg=25762.12, stdev=21476.90 00:31:19.754 clat percentiles (usec): 00:31:19.754 | 1.00th=[ 8160], 5.00th=[ 8979], 
10.00th=[ 9765], 20.00th=[11076], 00:31:19.754 | 30.00th=[11863], 40.00th=[13042], 50.00th=[14222], 60.00th=[15270], 00:31:19.754 | 70.00th=[18220], 80.00th=[52167], 90.00th=[53740], 95.00th=[55313], 00:31:19.754 | 99.00th=[94897], 99.50th=[95945], 99.90th=[96994], 99.95th=[96994], 00:31:19.754 | 99.99th=[96994] 00:31:19.754 bw ( KiB/s): min=10240, max=19456, per=32.17%, avg=14924.80, stdev=3111.16, samples=20 00:31:19.754 iops : min= 80, max= 152, avg=116.60, stdev=24.31, samples=20 00:31:19.754 lat (msec) : 10=11.38%, 20=58.85%, 50=0.43%, 100=29.34% 00:31:19.754 cpu : usr=96.63%, sys=3.14%, ctx=26, majf=0, minf=147 00:31:19.754 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:19.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:19.754 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:19.754 issued rwts: total=1169,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:19.754 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:19.754 00:31:19.754 Run status group 0 (all jobs): 00:31:19.754 READ: bw=45.3MiB/s (47.5MB/s), 14.6MiB/s-15.8MiB/s (15.3MB/s-16.6MB/s), io=455MiB (477MB), run=10005-10041msec 00:31:19.754 16:22:53 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:19.754 16:22:53 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:31:19.754 16:22:53 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:31:19.754 16:22:53 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:19.754 16:22:53 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:31:19.754 16:22:53 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:19.754 16:22:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.755 16:22:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:19.755 16:22:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.755 16:22:53 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:19.755 16:22:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.755 16:22:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:19.755 16:22:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.755 00:31:19.755 real 0m11.304s 00:31:19.755 user 0m44.896s 00:31:19.755 sys 0m1.237s 00:31:19.755 16:22:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:19.755 16:22:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:19.755 ************************************ 00:31:19.755 END TEST fio_dif_digest 00:31:19.755 ************************************ 00:31:19.755 16:22:53 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:19.755 16:22:53 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:19.755 16:22:53 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:31:19.755 16:22:53 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:19.755 16:22:53 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:31:19.755 16:22:53 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:19.755 16:22:53 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:31:19.755 16:22:53 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:19.755 16:22:53 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:19.755 
rmmod nvme_tcp 00:31:19.755 rmmod nvme_fabrics 00:31:19.755 rmmod nvme_keyring 00:31:19.755 16:22:53 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:19.755 16:22:53 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:31:19.755 16:22:53 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:31:19.755 16:22:53 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 2493112 ']' 00:31:19.755 16:22:53 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 2493112 00:31:19.755 16:22:53 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 2493112 ']' 00:31:19.755 16:22:53 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 2493112 00:31:19.755 16:22:53 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:31:19.755 16:22:53 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:19.755 16:22:53 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2493112 00:31:19.755 16:22:54 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:19.755 16:22:54 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:19.755 16:22:54 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2493112' 00:31:19.755 killing process with pid 2493112 00:31:19.755 16:22:54 nvmf_dif -- common/autotest_common.sh@967 -- # kill 2493112 00:31:19.755 16:22:54 nvmf_dif -- common/autotest_common.sh@972 -- # wait 2493112 00:31:19.755 16:22:54 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:31:19.755 16:22:54 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:21.669 Waiting for block devices as requested 00:31:21.669 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:21.931 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:21.931 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:21.931 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:21.931 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:22.191 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:22.191 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:22.191 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:22.451 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:31:22.451 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:22.712 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:22.712 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:22.712 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:22.712 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:22.971 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:22.971 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:22.971 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:23.231 16:22:59 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:23.232 16:22:59 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:23.232 16:22:59 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:23.232 16:22:59 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:23.232 16:22:59 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:23.232 16:22:59 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:23.232 16:22:59 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:25.774 16:23:01 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:25.774 00:31:25.774 real 1m17.250s 00:31:25.774 user 8m1.139s 00:31:25.774 sys 0m19.660s 00:31:25.774 16:23:01 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:25.774 
16:23:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:25.774 ************************************ 00:31:25.774 END TEST nvmf_dif 00:31:25.774 ************************************ 00:31:25.774 16:23:01 -- common/autotest_common.sh@1142 -- # return 0 00:31:25.774 16:23:01 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:25.774 16:23:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:25.774 16:23:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:25.774 16:23:01 -- common/autotest_common.sh@10 -- # set +x 00:31:25.774 ************************************ 00:31:25.774 START TEST nvmf_abort_qd_sizes 00:31:25.774 ************************************ 00:31:25.774 16:23:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:25.774 * Looking for test storage... 00:31:25.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:25.774 16:23:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:25.774 16:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:31:25.774 16:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:25.774 16:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:25.774 16:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:25.774 16:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:25.774 16:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:25.774 16:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:25.774 16:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:25.774 16:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:25.774 16:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:25.774 16:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:25.774 16:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:25.774 16:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:25.774 16:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:25.774 16:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:25.774 16:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:25.774 16:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:25.774 16:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:25.774 16:23:01 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:25.774 16:23:01 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:25.774 16:23:01 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:25.774 16:23:01 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.774 16:23:01 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.774 16:23:01 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.774 16:23:01 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:31:25.774 16:23:01 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.774 16:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:31:25.774 16:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:25.774 16:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:25.774 16:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:25.774 16:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:25.774 16:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:25.774 16:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:25.774 16:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:25.774 16:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:25.774 16:23:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:31:25.774 16:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:25.774 16:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:25.774 16:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:25.774 16:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:25.774 16:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:25.774 16:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:25.774 16:23:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:25.774 16:23:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:25.774 16:23:01 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:25.774 16:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:25.774 16:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:31:25.774 16:23:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:32.365 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:32.365 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:31:32.365 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:32.365 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:32.365 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:32.365 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:32.365 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:32.365 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:31:32.365 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:32.365 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:31:32.365 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:31:32.365 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:31:32.365 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:31:32.365 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:31:32.365 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:31:32.365 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:32.365 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:32.366 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:32.366 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:32.366 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:32.366 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
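The trace above (gather_supported_nvmf_pci_devs) matches the E810 functions by PCI vendor and device ID, then resolves each one to its kernel interface by globbing /sys/bus/pci/devices/$pci/net/. A minimal standalone sketch of that lookup is below; it is not part of nvmf/common.sh, the helper name map_pci_to_netdev is invented for illustration, and only the 0x8086 / 0x159b and 0x1592 IDs are taken from this log.

# Hedged sketch: resolve an E810 PCI function to its net interface the same
# way the pci_net_devs glob above does, by listing /sys/bus/pci/devices/<bdf>/net.
map_pci_to_netdev() {
    local bdf=$1
    local netdir=/sys/bus/pci/devices/$bdf/net
    [[ -d $netdir ]] || { echo "no net device under $bdf" >&2; return 1; }
    ls "$netdir"    # each entry is an interface name, e.g. cvl_0_0
}

# Walk every Intel E810 function the way the pci_devs loop above does.
for dev in /sys/bus/pci/devices/*; do
    vendor=$(<"$dev/vendor") device=$(<"$dev/device")
    if [[ $vendor == 0x8086 && ( $device == 0x159b || $device == 0x1592 ) ]]; then
        echo "Found ${dev##*/} ($vendor - $device)"
        map_pci_to_netdev "${dev##*/}"
    fi
done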
00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:32.366 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:32.366 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:31:32.366 00:31:32.366 --- 10.0.0.2 ping statistics --- 00:31:32.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:32.366 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:32.366 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:32.366 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.370 ms 00:31:32.366 00:31:32.366 --- 10.0.0.1 ping statistics --- 00:31:32.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:32.366 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:31:32.366 16:23:07 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:35.669 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:35.669 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:35.669 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:35.669 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:35.669 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:35.669 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:35.669 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:35.669 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:35.669 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:35.669 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:35.669 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:35.669 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:35.669 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:35.669 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:35.669 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:35.669 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:35.669 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:31:36.241 16:23:11 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:36.241 16:23:11 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:36.241 16:23:11 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:36.241 16:23:11 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:36.241 16:23:11 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:36.241 16:23:11 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:36.241 16:23:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:31:36.241 16:23:11 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:36.241 16:23:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:36.241 16:23:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:36.241 16:23:11 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=2512765 00:31:36.241 16:23:11 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 2512765 00:31:36.241 16:23:11 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:31:36.241 16:23:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 2512765 ']' 00:31:36.241 16:23:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:36.241 16:23:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:36.241 16:23:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:36.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:36.241 16:23:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:36.241 16:23:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:36.241 [2024-07-15 16:23:11.925083] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:31:36.241 [2024-07-15 16:23:11.925152] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:36.241 EAL: No free 2048 kB hugepages reported on node 1 00:31:36.241 [2024-07-15 16:23:11.994540] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:36.241 [2024-07-15 16:23:12.071374] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:36.241 [2024-07-15 16:23:12.071412] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:36.241 [2024-07-15 16:23:12.071420] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:36.241 [2024-07-15 16:23:12.071426] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:36.241 [2024-07-15 16:23:12.071432] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:36.241 [2024-07-15 16:23:12.071569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:36.241 [2024-07-15 16:23:12.071705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:36.241 [2024-07-15 16:23:12.071864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:36.241 [2024-07-15 16:23:12.071865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:37.183 16:23:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:37.183 16:23:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:31:37.183 16:23:12 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:37.183 16:23:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:37.183 16:23:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:37.183 16:23:12 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:37.183 16:23:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:31:37.183 16:23:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:31:37.183 16:23:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:31:37.183 16:23:12 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:31:37.183 16:23:12 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:31:37.183 16:23:12 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:31:37.183 16:23:12 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:31:37.183 16:23:12 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:31:37.183 16:23:12 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:31:37.183 16:23:12 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:31:37.183 16:23:12 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:31:37.183 16:23:12 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:31:37.183 16:23:12 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:31:37.183 16:23:12 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:31:37.183 16:23:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:31:37.183 16:23:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:31:37.183 16:23:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:31:37.183 16:23:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:37.183 16:23:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:37.183 16:23:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:37.183 ************************************ 00:31:37.183 START TEST spdk_target_abort 00:31:37.183 ************************************ 00:31:37.183 16:23:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:31:37.183 16:23:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:31:37.183 16:23:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:31:37.183 16:23:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.183 16:23:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:37.443 spdk_targetn1 00:31:37.443 16:23:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.443 16:23:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:37.443 16:23:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.443 16:23:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:37.443 [2024-07-15 16:23:13.109128] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:37.443 16:23:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.443 16:23:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:31:37.443 16:23:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.443 16:23:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:37.443 16:23:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.443 16:23:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:31:37.443 16:23:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.443 16:23:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:37.443 16:23:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.443 16:23:13 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:31:37.443 16:23:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.443 16:23:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:37.443 [2024-07-15 16:23:13.149368] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:37.443 16:23:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.443 16:23:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:31:37.444 16:23:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:37.444 16:23:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:37.444 16:23:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:31:37.444 16:23:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:37.444 16:23:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:37.444 16:23:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:37.444 16:23:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:37.444 16:23:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:37.444 16:23:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:37.444 16:23:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:37.444 16:23:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:37.444 16:23:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:37.444 16:23:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:37.444 16:23:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:31:37.444 16:23:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:37.444 16:23:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:37.444 16:23:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:37.444 16:23:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:37.444 16:23:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:37.444 16:23:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:37.444 EAL: No free 2048 kB hugepages 
reported on node 1 00:31:37.704 [2024-07-15 16:23:13.396976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:32 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:31:37.704 [2024-07-15 16:23:13.397000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:31:37.705 [2024-07-15 16:23:13.420547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:624 len:8 PRP1 0x2000078be000 PRP2 0x0 00:31:37.705 [2024-07-15 16:23:13.420565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:004f p:1 m:0 dnr:0 00:31:37.705 [2024-07-15 16:23:13.493003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2712 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:31:37.705 [2024-07-15 16:23:13.493021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:41.046 Initializing NVMe Controllers 00:31:41.046 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:41.046 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:41.046 Initialization complete. Launching workers. 00:31:41.046 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9885, failed: 3 00:31:41.046 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2743, failed to submit 7145 00:31:41.046 success 767, unsuccess 1976, failed 0 00:31:41.046 16:23:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:41.046 16:23:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:41.046 EAL: No free 2048 kB hugepages reported on node 1 00:31:41.046 [2024-07-15 16:23:16.819353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:176 nsid:1 lba:2632 len:8 PRP1 0x200007c46000 PRP2 0x0 00:31:41.046 [2024-07-15 16:23:16.819392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:176 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:42.453 [2024-07-15 16:23:18.229275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:187 nsid:1 lba:35448 len:8 PRP1 0x200007c5a000 PRP2 0x0 00:31:42.453 [2024-07-15 16:23:18.229312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:187 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:44.365 Initializing NVMe Controllers 00:31:44.365 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:44.365 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:44.365 Initialization complete. Launching workers. 
00:31:44.365 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8638, failed: 2 00:31:44.365 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1226, failed to submit 7414 00:31:44.365 success 344, unsuccess 882, failed 0 00:31:44.365 16:23:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:44.365 16:23:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:44.365 EAL: No free 2048 kB hugepages reported on node 1 00:31:46.274 [2024-07-15 16:23:21.734393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:149 nsid:1 lba:194856 len:8 PRP1 0x2000078ec000 PRP2 0x0 00:31:46.274 [2024-07-15 16:23:21.734430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:149 cdw0:0 sqhd:00cf p:0 m:0 dnr:0 00:31:46.274 [2024-07-15 16:23:22.076398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:147 nsid:1 lba:233096 len:8 PRP1 0x200007916000 PRP2 0x0 00:31:46.274 [2024-07-15 16:23:22.076419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:147 cdw0:0 sqhd:0072 p:1 m:0 dnr:0 00:31:47.215 Initializing NVMe Controllers 00:31:47.215 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:47.215 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:47.215 Initialization complete. Launching workers. 00:31:47.215 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 42090, failed: 2 00:31:47.215 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2600, failed to submit 39492 00:31:47.215 success 617, unsuccess 1983, failed 0 00:31:47.215 16:23:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:31:47.215 16:23:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.215 16:23:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:47.215 16:23:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.215 16:23:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:31:47.215 16:23:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.215 16:23:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:49.126 16:23:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.126 16:23:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2512765 00:31:49.126 16:23:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 2512765 ']' 00:31:49.126 16:23:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 2512765 00:31:49.126 16:23:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:31:49.126 16:23:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:49.126 
16:23:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2512765 00:31:49.126 16:23:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:49.126 16:23:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:49.126 16:23:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2512765' 00:31:49.126 killing process with pid 2512765 00:31:49.126 16:23:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 2512765 00:31:49.126 16:23:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 2512765 00:31:49.387 00:31:49.387 real 0m12.280s 00:31:49.387 user 0m49.803s 00:31:49.387 sys 0m1.906s 00:31:49.387 16:23:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:49.387 16:23:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:49.387 ************************************ 00:31:49.387 END TEST spdk_target_abort 00:31:49.387 ************************************ 00:31:49.387 16:23:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:31:49.387 16:23:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:31:49.387 16:23:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:49.387 16:23:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:49.387 16:23:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:49.387 ************************************ 00:31:49.387 START TEST kernel_target_abort 00:31:49.387 ************************************ 00:31:49.387 16:23:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:31:49.387 16:23:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:31:49.387 16:23:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:31:49.387 16:23:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:49.387 16:23:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:49.387 16:23:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:49.387 16:23:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:49.387 16:23:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:49.387 16:23:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:49.387 16:23:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:49.387 16:23:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:49.387 16:23:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:49.387 16:23:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:49.387 16:23:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:49.387 16:23:25 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:49.387 16:23:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:49.387 16:23:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:49.387 16:23:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:49.387 16:23:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:31:49.387 16:23:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:31:49.387 16:23:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:49.387 16:23:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:49.387 16:23:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:52.690 Waiting for block devices as requested 00:31:52.690 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:52.951 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:52.951 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:52.951 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:53.212 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:53.212 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:53.212 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:53.212 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:53.473 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:31:53.473 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:53.735 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:53.735 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:53.735 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:53.735 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:53.996 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:53.996 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:53.996 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:54.256 16:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:54.256 16:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:54.256 16:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:54.256 16:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:31:54.256 16:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:54.256 16:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:31:54.256 16:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:54.256 16:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:54.257 16:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:54.518 No valid GPT data, bailing 00:31:54.518 16:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 
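Before the kernel target is built, the trace above screens the candidate NVMe block device: it must not be zoned, and spdk-gpt.py plus blkid must find no partition table on it (the "No valid GPT data, bailing" message followed by an empty PTTYPE is the pass case). A rough standalone equivalent using only sysfs and blkid follows; device_is_usable is a hypothetical helper, not a function from the test scripts, and the real harness additionally consults scripts/spdk-gpt.py.

device_is_usable() {
    local dev=$1                                  # e.g. nvme0n1
    local zoned=/sys/block/$dev/queue/zoned
    if [[ -e $zoned && $(<"$zoned") != none ]]; then
        echo "$dev is zoned, skipping" >&2
        return 1
    fi
    # blkid prints the partition-table type (gpt, dos, ...) or nothing at all;
    # empty output corresponds to the pt= / return 1 path in the trace, i.e.
    # the device is not in use.
    if [[ -n $(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null) ]]; then
        echo "$dev already carries a partition table, skipping" >&2
        return 1
    fi
}

device_is_usable nvme0n1 && echo "/dev/nvme0n1 can back the kernel target"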
00:31:54.518 16:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:31:54.518 16:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:31:54.518 16:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:54.518 16:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:54.518 16:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:54.518 16:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:54.518 16:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:54.518 16:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:54.518 16:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:31:54.518 16:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:54.518 16:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:31:54.518 16:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:54.518 16:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:31:54.518 16:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:31:54.518 16:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:31:54.518 16:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:54.518 16:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:31:54.518 00:31:54.518 Discovery Log Number of Records 2, Generation counter 2 00:31:54.518 =====Discovery Log Entry 0====== 00:31:54.518 trtype: tcp 00:31:54.518 adrfam: ipv4 00:31:54.518 subtype: current discovery subsystem 00:31:54.518 treq: not specified, sq flow control disable supported 00:31:54.518 portid: 1 00:31:54.518 trsvcid: 4420 00:31:54.518 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:54.518 traddr: 10.0.0.1 00:31:54.518 eflags: none 00:31:54.518 sectype: none 00:31:54.518 =====Discovery Log Entry 1====== 00:31:54.518 trtype: tcp 00:31:54.518 adrfam: ipv4 00:31:54.518 subtype: nvme subsystem 00:31:54.518 treq: not specified, sq flow control disable supported 00:31:54.518 portid: 1 00:31:54.518 trsvcid: 4420 00:31:54.518 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:54.518 traddr: 10.0.0.1 00:31:54.518 eflags: none 00:31:54.518 sectype: none 00:31:54.518 16:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:31:54.519 16:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:54.519 16:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:54.519 16:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:31:54.519 
16:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:54.519 16:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:54.519 16:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:54.519 16:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:54.519 16:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:54.519 16:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:54.519 16:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:54.519 16:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:54.519 16:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:54.519 16:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:54.519 16:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:31:54.519 16:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:54.519 16:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:31:54.519 16:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:54.519 16:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:54.519 16:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:54.519 16:23:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:54.519 EAL: No free 2048 kB hugepages reported on node 1 00:31:57.821 Initializing NVMe Controllers 00:31:57.821 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:57.821 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:57.821 Initialization complete. Launching workers. 
00:31:57.821 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 47953, failed: 0 00:31:57.821 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 47953, failed to submit 0 00:31:57.821 success 0, unsuccess 47953, failed 0 00:31:57.821 16:23:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:57.821 16:23:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:57.821 EAL: No free 2048 kB hugepages reported on node 1 00:32:01.121 Initializing NVMe Controllers 00:32:01.121 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:01.121 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:01.121 Initialization complete. Launching workers. 00:32:01.121 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 88203, failed: 0 00:32:01.121 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 22226, failed to submit 65977 00:32:01.121 success 0, unsuccess 22226, failed 0 00:32:01.121 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:01.121 16:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:01.121 EAL: No free 2048 kB hugepages reported on node 1 00:32:03.706 Initializing NVMe Controllers 00:32:03.706 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:03.706 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:03.706 Initialization complete. Launching workers. 
00:32:03.706 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 85196, failed: 0 00:32:03.706 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21298, failed to submit 63898 00:32:03.706 success 0, unsuccess 21298, failed 0 00:32:03.706 16:23:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:32:03.706 16:23:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:03.706 16:23:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:32:03.706 16:23:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:03.706 16:23:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:03.706 16:23:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:03.706 16:23:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:03.968 16:23:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:03.968 16:23:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:03.968 16:23:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:07.297 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:07.297 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:07.297 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:07.297 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:07.297 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:07.297 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:07.297 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:07.297 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:07.297 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:07.297 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:07.297 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:07.297 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:07.297 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:07.297 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:07.297 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:07.297 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:09.222 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:32:09.484 00:32:09.484 real 0m19.989s 00:32:09.484 user 0m8.152s 00:32:09.484 sys 0m6.305s 00:32:09.484 16:23:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:09.484 16:23:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:09.484 ************************************ 00:32:09.484 END TEST kernel_target_abort 00:32:09.484 ************************************ 00:32:09.484 16:23:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:32:09.484 16:23:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:09.484 16:23:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:32:09.484 16:23:45 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:09.484 16:23:45 nvmf_abort_qd_sizes -- 
nvmf/common.sh@117 -- # sync 00:32:09.484 16:23:45 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:09.484 16:23:45 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:32:09.484 16:23:45 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:09.484 16:23:45 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:09.484 rmmod nvme_tcp 00:32:09.484 rmmod nvme_fabrics 00:32:09.484 rmmod nvme_keyring 00:32:09.484 16:23:45 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:09.484 16:23:45 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:32:09.484 16:23:45 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:32:09.484 16:23:45 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 2512765 ']' 00:32:09.484 16:23:45 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 2512765 00:32:09.484 16:23:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 2512765 ']' 00:32:09.484 16:23:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 2512765 00:32:09.484 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2512765) - No such process 00:32:09.484 16:23:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 2512765 is not found' 00:32:09.484 Process with pid 2512765 is not found 00:32:09.484 16:23:45 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:32:09.484 16:23:45 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:12.787 Waiting for block devices as requested 00:32:12.787 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:12.787 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:12.787 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:13.048 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:13.048 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:13.048 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:13.309 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:13.309 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:13.309 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:13.570 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:13.570 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:13.831 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:13.831 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:13.831 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:13.831 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:14.091 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:14.091 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:14.352 16:23:50 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:14.352 16:23:50 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:14.352 16:23:50 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:14.352 16:23:50 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:14.352 16:23:50 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:14.352 16:23:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:14.352 16:23:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:16.895 16:23:52 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:16.895 00:32:16.895 real 0m50.954s 00:32:16.895 user 1m2.965s 00:32:16.895 sys 
0m18.482s 00:32:16.895 16:23:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:16.895 16:23:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:16.895 ************************************ 00:32:16.895 END TEST nvmf_abort_qd_sizes 00:32:16.895 ************************************ 00:32:16.895 16:23:52 -- common/autotest_common.sh@1142 -- # return 0 00:32:16.895 16:23:52 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:16.895 16:23:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:16.895 16:23:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:16.895 16:23:52 -- common/autotest_common.sh@10 -- # set +x 00:32:16.895 ************************************ 00:32:16.895 START TEST keyring_file 00:32:16.895 ************************************ 00:32:16.895 16:23:52 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:16.895 * Looking for test storage... 00:32:16.895 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:16.895 16:23:52 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:16.895 16:23:52 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:16.895 16:23:52 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:32:16.895 16:23:52 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:16.895 16:23:52 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:16.895 16:23:52 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:16.895 16:23:52 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:16.895 16:23:52 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:16.895 16:23:52 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:16.895 16:23:52 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:16.895 16:23:52 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:16.895 16:23:52 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:16.895 16:23:52 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:16.895 16:23:52 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:16.895 16:23:52 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:16.895 16:23:52 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:16.895 16:23:52 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:16.895 16:23:52 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:16.895 16:23:52 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:16.895 16:23:52 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:16.895 16:23:52 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:16.895 16:23:52 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:16.895 16:23:52 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:16.895 16:23:52 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.895 16:23:52 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.895 16:23:52 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.895 16:23:52 keyring_file -- paths/export.sh@5 -- # export PATH 00:32:16.895 16:23:52 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.895 16:23:52 keyring_file -- nvmf/common.sh@47 -- # : 0 00:32:16.895 16:23:52 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:16.895 16:23:52 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:16.895 16:23:52 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:16.895 16:23:52 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:16.895 16:23:52 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:16.895 16:23:52 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:16.895 16:23:52 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:16.895 16:23:52 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:16.895 16:23:52 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:16.895 16:23:52 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:16.895 16:23:52 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:16.895 16:23:52 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:32:16.895 16:23:52 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:32:16.895 16:23:52 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:32:16.895 16:23:52 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:16.895 16:23:52 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:16.895 16:23:52 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:16.895 16:23:52 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:16.895 16:23:52 keyring_file -- 
keyring/common.sh@17 -- # digest=0 00:32:16.895 16:23:52 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:16.895 16:23:52 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.0JAOpvIaaF 00:32:16.895 16:23:52 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:16.895 16:23:52 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:16.895 16:23:52 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:16.895 16:23:52 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:16.895 16:23:52 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:16.895 16:23:52 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:16.895 16:23:52 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:16.895 16:23:52 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.0JAOpvIaaF 00:32:16.895 16:23:52 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.0JAOpvIaaF 00:32:16.895 16:23:52 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.0JAOpvIaaF 00:32:16.895 16:23:52 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:32:16.895 16:23:52 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:16.895 16:23:52 keyring_file -- keyring/common.sh@17 -- # name=key1 00:32:16.895 16:23:52 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:16.895 16:23:52 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:16.895 16:23:52 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:16.895 16:23:52 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.wTesNlfxQm 00:32:16.895 16:23:52 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:16.895 16:23:52 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:16.896 16:23:52 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:16.896 16:23:52 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:16.896 16:23:52 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:16.896 16:23:52 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:16.896 16:23:52 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:16.896 16:23:52 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.wTesNlfxQm 00:32:16.896 16:23:52 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.wTesNlfxQm 00:32:16.896 16:23:52 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.wTesNlfxQm 00:32:16.896 16:23:52 keyring_file -- keyring/file.sh@30 -- # tgtpid=2523056 00:32:16.896 16:23:52 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2523056 00:32:16.896 16:23:52 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:16.896 16:23:52 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2523056 ']' 00:32:16.896 16:23:52 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:16.896 16:23:52 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:16.896 16:23:52 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:16.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
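[editor note] The key-preparation trace above condenses to the following hedged sketch; the helper names and flags mirror keyring/common.sh and keyring/file.sh as traced, paths are shortened to the spdk checkout, and the redirection into the temp file is assumed from context rather than visible in the xtrace:
    # prep_key key0 00112233445566778899aabbccddeeff 0, as traced above
    path=$(mktemp)                                                        # e.g. /tmp/tmp.0JAOpvIaaF
    format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$path"   # NVMeTLSkey-1 interchange string
    chmod 0600 "$path"                                                    # keyring_file only accepts owner-only key files
    build/bin/spdk_tgt &                                                  # file.sh records this pid as tgtpid
    # waitforlisten then blocks until /var/tmp/spdk.sock answers, which is the echo seen above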
00:32:16.896 16:23:52 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:16.896 16:23:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:16.896 [2024-07-15 16:23:52.510877] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:32:16.896 [2024-07-15 16:23:52.510957] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2523056 ] 00:32:16.896 EAL: No free 2048 kB hugepages reported on node 1 00:32:16.896 [2024-07-15 16:23:52.576783] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:16.896 [2024-07-15 16:23:52.654202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:17.468 16:23:53 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:17.468 16:23:53 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:32:17.468 16:23:53 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:32:17.468 16:23:53 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.468 16:23:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:17.468 [2024-07-15 16:23:53.283135] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:17.468 null0 00:32:17.729 [2024-07-15 16:23:53.315175] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:17.729 [2024-07-15 16:23:53.315509] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:17.729 [2024-07-15 16:23:53.323187] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:32:17.729 16:23:53 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.729 16:23:53 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:17.729 16:23:53 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:17.729 16:23:53 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:17.729 16:23:53 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:17.729 16:23:53 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:17.729 16:23:53 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:17.729 16:23:53 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:17.729 16:23:53 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:17.729 16:23:53 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.729 16:23:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:17.729 [2024-07-15 16:23:53.335219] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:32:17.729 request: 00:32:17.729 { 00:32:17.729 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:32:17.729 "secure_channel": false, 00:32:17.729 "listen_address": { 00:32:17.729 "trtype": "tcp", 00:32:17.729 "traddr": "127.0.0.1", 00:32:17.729 "trsvcid": "4420" 00:32:17.729 }, 00:32:17.729 "method": "nvmf_subsystem_add_listener", 00:32:17.729 "req_id": 1 00:32:17.729 } 00:32:17.729 Got JSON-RPC error response 
00:32:17.729 response: 00:32:17.729 { 00:32:17.729 "code": -32602, 00:32:17.729 "message": "Invalid parameters" 00:32:17.729 } 00:32:17.729 16:23:53 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:17.729 16:23:53 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:17.729 16:23:53 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:17.729 16:23:53 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:17.729 16:23:53 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:17.729 16:23:53 keyring_file -- keyring/file.sh@46 -- # bperfpid=2523213 00:32:17.729 16:23:53 keyring_file -- keyring/file.sh@48 -- # waitforlisten 2523213 /var/tmp/bperf.sock 00:32:17.729 16:23:53 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2523213 ']' 00:32:17.729 16:23:53 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:17.729 16:23:53 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:17.729 16:23:53 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:17.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:17.729 16:23:53 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:17.729 16:23:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:17.729 16:23:53 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:32:17.729 [2024-07-15 16:23:53.387782] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
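[editor note] The NOT wrapper traced above is the suite's expected-failure helper: the target already listens on 127.0.0.1:4420, so adding the identical listener again must return a JSON-RPC error ("Listener already exists", code -32602), which NOT converts into a pass. A hedged sketch of the same expectation as plain shell, using the rpc.py arguments from the trace:
    # second add of an identical listener is expected to fail
    if scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0; then
        echo "unexpected success: duplicate listener was accepted" >&2
        exit 1
    fi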
00:32:17.729 [2024-07-15 16:23:53.387829] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2523213 ] 00:32:17.729 EAL: No free 2048 kB hugepages reported on node 1 00:32:17.729 [2024-07-15 16:23:53.462432] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:17.729 [2024-07-15 16:23:53.526068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:18.301 16:23:54 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:18.301 16:23:54 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:32:18.301 16:23:54 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.0JAOpvIaaF 00:32:18.301 16:23:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.0JAOpvIaaF 00:32:18.562 16:23:54 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.wTesNlfxQm 00:32:18.562 16:23:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.wTesNlfxQm 00:32:18.824 16:23:54 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:32:18.824 16:23:54 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:32:18.824 16:23:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:18.824 16:23:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:18.824 16:23:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:18.824 16:23:54 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.0JAOpvIaaF == \/\t\m\p\/\t\m\p\.\0\J\A\O\p\v\I\a\a\F ]] 00:32:18.824 16:23:54 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:32:18.824 16:23:54 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:32:18.824 16:23:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:18.824 16:23:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:18.824 16:23:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:19.085 16:23:54 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.wTesNlfxQm == \/\t\m\p\/\t\m\p\.\w\T\e\s\N\l\f\x\Q\m ]] 00:32:19.085 16:23:54 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:32:19.085 16:23:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:19.085 16:23:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:19.085 16:23:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:19.085 16:23:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:19.085 16:23:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:19.346 16:23:54 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:32:19.346 16:23:54 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:32:19.346 16:23:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:19.346 16:23:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:19.346 
16:23:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:19.346 16:23:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:19.346 16:23:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:19.346 16:23:55 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:32:19.346 16:23:55 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:19.346 16:23:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:19.606 [2024-07-15 16:23:55.230499] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:19.606 nvme0n1 00:32:19.606 16:23:55 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:32:19.606 16:23:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:19.606 16:23:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:19.606 16:23:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:19.606 16:23:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:19.606 16:23:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:19.866 16:23:55 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:32:19.866 16:23:55 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:32:19.866 16:23:55 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:19.866 16:23:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:19.866 16:23:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:19.866 16:23:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:19.866 16:23:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:19.866 16:23:55 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:32:19.866 16:23:55 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:20.127 Running I/O for 1 seconds... 
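[editor note] The repeated get_refcnt checks above are just keyring_get_keys piped through jq; once bdev_nvme_attach_controller takes a reference on key0 its refcnt is expected to rise from 1 to 2 while key1 stays at 1. A hedged sketch of the same query, folding the two jq stages from keyring/common.sh into one filter:
    # refcnt of key0 on the bperf instance
    scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
        | jq -r '.[] | select(.name == "key0") | .refcnt'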
00:32:21.066 00:32:21.066 Latency(us) 00:32:21.066 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:21.066 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:32:21.066 nvme0n1 : 1.02 7099.11 27.73 0.00 0.00 17877.16 10922.67 57671.68 00:32:21.066 =================================================================================================================== 00:32:21.066 Total : 7099.11 27.73 0.00 0.00 17877.16 10922.67 57671.68 00:32:21.066 0 00:32:21.066 16:23:56 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:21.066 16:23:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:21.338 16:23:56 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:32:21.338 16:23:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:21.338 16:23:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:21.338 16:23:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:21.338 16:23:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:21.338 16:23:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:21.338 16:23:57 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:32:21.338 16:23:57 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:32:21.338 16:23:57 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:21.338 16:23:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:21.338 16:23:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:21.338 16:23:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:21.338 16:23:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:21.610 16:23:57 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:32:21.610 16:23:57 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:21.610 16:23:57 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:21.610 16:23:57 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:21.610 16:23:57 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:21.610 16:23:57 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:21.610 16:23:57 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:21.610 16:23:57 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:21.610 16:23:57 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:21.610 16:23:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk key1 00:32:21.610 [2024-07-15 16:23:57.378567] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:21.610 [2024-07-15 16:23:57.379372] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x154d9d0 (107): Transport endpoint is not connected 00:32:21.610 [2024-07-15 16:23:57.380368] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x154d9d0 (9): Bad file descriptor 00:32:21.610 [2024-07-15 16:23:57.381370] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:21.610 [2024-07-15 16:23:57.381377] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:21.610 [2024-07-15 16:23:57.381382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:21.610 request: 00:32:21.610 { 00:32:21.610 "name": "nvme0", 00:32:21.610 "trtype": "tcp", 00:32:21.610 "traddr": "127.0.0.1", 00:32:21.610 "adrfam": "ipv4", 00:32:21.610 "trsvcid": "4420", 00:32:21.610 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:21.610 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:21.610 "prchk_reftag": false, 00:32:21.610 "prchk_guard": false, 00:32:21.610 "hdgst": false, 00:32:21.610 "ddgst": false, 00:32:21.610 "psk": "key1", 00:32:21.610 "method": "bdev_nvme_attach_controller", 00:32:21.610 "req_id": 1 00:32:21.610 } 00:32:21.610 Got JSON-RPC error response 00:32:21.610 response: 00:32:21.610 { 00:32:21.610 "code": -5, 00:32:21.610 "message": "Input/output error" 00:32:21.610 } 00:32:21.610 16:23:57 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:21.610 16:23:57 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:21.610 16:23:57 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:21.610 16:23:57 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:21.610 16:23:57 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:32:21.610 16:23:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:21.610 16:23:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:21.610 16:23:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:21.610 16:23:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:21.610 16:23:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:21.882 16:23:57 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:32:21.882 16:23:57 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:32:21.882 16:23:57 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:21.882 16:23:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:21.882 16:23:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:21.882 16:23:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:21.882 16:23:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:21.882 16:23:57 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:32:21.882 16:23:57 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:32:21.882 16:23:57 keyring_file -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:22.142 16:23:57 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:32:22.142 16:23:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:32:22.402 16:23:58 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:32:22.402 16:23:58 keyring_file -- keyring/file.sh@77 -- # jq length 00:32:22.402 16:23:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:22.402 16:23:58 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:32:22.402 16:23:58 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.0JAOpvIaaF 00:32:22.402 16:23:58 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.0JAOpvIaaF 00:32:22.402 16:23:58 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:22.402 16:23:58 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.0JAOpvIaaF 00:32:22.402 16:23:58 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:22.402 16:23:58 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:22.402 16:23:58 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:22.402 16:23:58 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:22.402 16:23:58 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.0JAOpvIaaF 00:32:22.402 16:23:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.0JAOpvIaaF 00:32:22.662 [2024-07-15 16:23:58.325083] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.0JAOpvIaaF': 0100660 00:32:22.662 [2024-07-15 16:23:58.325100] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:32:22.662 request: 00:32:22.662 { 00:32:22.662 "name": "key0", 00:32:22.662 "path": "/tmp/tmp.0JAOpvIaaF", 00:32:22.662 "method": "keyring_file_add_key", 00:32:22.662 "req_id": 1 00:32:22.662 } 00:32:22.662 Got JSON-RPC error response 00:32:22.662 response: 00:32:22.662 { 00:32:22.662 "code": -1, 00:32:22.662 "message": "Operation not permitted" 00:32:22.662 } 00:32:22.662 16:23:58 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:22.662 16:23:58 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:22.662 16:23:58 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:22.662 16:23:58 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:22.662 16:23:58 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.0JAOpvIaaF 00:32:22.662 16:23:58 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.0JAOpvIaaF 00:32:22.662 16:23:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.0JAOpvIaaF 00:32:22.662 16:23:58 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.0JAOpvIaaF 00:32:22.662 16:23:58 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 
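[editor note] The chmod 0660 / chmod 0600 pair traced above exercises keyring_file's permission check: a group-readable key file is rejected with "Operation not permitted" (keyring_file_check_path logs the 0100660 mode), and only the 0600 copy can be registered. A hedged sketch of that sequence, reusing the path from this run:
    chmod 0660 /tmp/tmp.0JAOpvIaaF
    scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.0JAOpvIaaF || echo "rejected as expected"
    chmod 0600 /tmp/tmp.0JAOpvIaaF
    scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.0JAOpvIaaF    # now accepted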
00:32:22.662 16:23:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:22.947 16:23:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:22.947 16:23:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:22.947 16:23:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:22.947 16:23:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:22.947 16:23:58 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:32:22.947 16:23:58 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:22.947 16:23:58 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:22.947 16:23:58 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:22.947 16:23:58 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:22.947 16:23:58 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:22.947 16:23:58 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:22.947 16:23:58 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:22.947 16:23:58 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:22.947 16:23:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:23.208 [2024-07-15 16:23:58.806306] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.0JAOpvIaaF': No such file or directory 00:32:23.208 [2024-07-15 16:23:58.806320] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:32:23.208 [2024-07-15 16:23:58.806336] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:32:23.208 [2024-07-15 16:23:58.806341] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:23.208 [2024-07-15 16:23:58.806345] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:32:23.208 request: 00:32:23.208 { 00:32:23.208 "name": "nvme0", 00:32:23.208 "trtype": "tcp", 00:32:23.208 "traddr": "127.0.0.1", 00:32:23.208 "adrfam": "ipv4", 00:32:23.208 "trsvcid": "4420", 00:32:23.208 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:23.208 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:23.208 "prchk_reftag": false, 00:32:23.208 "prchk_guard": false, 00:32:23.208 "hdgst": false, 00:32:23.208 "ddgst": false, 00:32:23.208 "psk": "key0", 00:32:23.208 "method": "bdev_nvme_attach_controller", 00:32:23.208 "req_id": 1 00:32:23.208 } 00:32:23.208 Got JSON-RPC error response 00:32:23.208 response: 00:32:23.208 { 00:32:23.208 "code": -19, 00:32:23.208 "message": "No such device" 00:32:23.208 } 00:32:23.208 16:23:58 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:23.208 
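[editor note] The failure traced above is deliberate: key0 is still registered, but its backing file was removed with rm -f, and keyring_file re-reads the file when the key is used, so the attach is expected to fail with "No such device" (code -19). A hedged sketch of the same negative check, with the attach flags taken from the trace:
    rm -f /tmp/tmp.0JAOpvIaaF                        # key0 stays in the keyring, the file is gone
    scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys   # key0 still listed with refcnt 1
    if scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
           -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0; then
        echo "unexpected success: attach worked without the key file" >&2
        exit 1
    fi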
16:23:58 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:23.208 16:23:58 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:23.208 16:23:58 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:23.208 16:23:58 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:32:23.208 16:23:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:23.208 16:23:58 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:23.208 16:23:58 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:23.208 16:23:58 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:23.208 16:23:58 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:23.208 16:23:58 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:23.208 16:23:58 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:23.208 16:23:58 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.nDfQbegYde 00:32:23.208 16:23:58 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:23.208 16:23:58 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:23.208 16:23:58 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:23.208 16:23:58 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:23.208 16:23:58 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:23.208 16:23:58 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:23.208 16:23:58 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:23.208 16:23:59 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.nDfQbegYde 00:32:23.208 16:23:59 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.nDfQbegYde 00:32:23.208 16:23:59 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.nDfQbegYde 00:32:23.208 16:23:59 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nDfQbegYde 00:32:23.208 16:23:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nDfQbegYde 00:32:23.468 16:23:59 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:23.468 16:23:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:23.729 nvme0n1 00:32:23.729 16:23:59 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:32:23.729 16:23:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:23.729 16:23:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:23.729 16:23:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:23.729 16:23:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:23.729 16:23:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:23.729 16:23:59 keyring_file -- 
keyring/file.sh@99 -- # (( 2 == 2 )) 00:32:23.729 16:23:59 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:32:23.729 16:23:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:23.989 16:23:59 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:32:23.989 16:23:59 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:32:23.989 16:23:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:23.989 16:23:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:23.989 16:23:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:24.250 16:23:59 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:32:24.250 16:23:59 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:32:24.250 16:23:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:24.250 16:23:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:24.250 16:23:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:24.250 16:23:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:24.250 16:23:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:24.250 16:24:00 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:32:24.250 16:24:00 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:24.250 16:24:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:24.511 16:24:00 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:32:24.511 16:24:00 keyring_file -- keyring/file.sh@104 -- # jq length 00:32:24.511 16:24:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:24.772 16:24:00 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:32:24.772 16:24:00 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nDfQbegYde 00:32:24.772 16:24:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nDfQbegYde 00:32:24.772 16:24:00 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.wTesNlfxQm 00:32:24.772 16:24:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.wTesNlfxQm 00:32:25.033 16:24:00 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:25.033 16:24:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:25.295 nvme0n1 00:32:25.295 16:24:00 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:32:25.295 16:24:00 
keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:32:25.295 16:24:01 keyring_file -- keyring/file.sh@112 -- # config='{ 00:32:25.295 "subsystems": [ 00:32:25.295 { 00:32:25.295 "subsystem": "keyring", 00:32:25.295 "config": [ 00:32:25.295 { 00:32:25.295 "method": "keyring_file_add_key", 00:32:25.295 "params": { 00:32:25.295 "name": "key0", 00:32:25.295 "path": "/tmp/tmp.nDfQbegYde" 00:32:25.295 } 00:32:25.295 }, 00:32:25.295 { 00:32:25.295 "method": "keyring_file_add_key", 00:32:25.295 "params": { 00:32:25.295 "name": "key1", 00:32:25.295 "path": "/tmp/tmp.wTesNlfxQm" 00:32:25.295 } 00:32:25.295 } 00:32:25.295 ] 00:32:25.295 }, 00:32:25.295 { 00:32:25.295 "subsystem": "iobuf", 00:32:25.295 "config": [ 00:32:25.295 { 00:32:25.295 "method": "iobuf_set_options", 00:32:25.295 "params": { 00:32:25.295 "small_pool_count": 8192, 00:32:25.295 "large_pool_count": 1024, 00:32:25.295 "small_bufsize": 8192, 00:32:25.295 "large_bufsize": 135168 00:32:25.295 } 00:32:25.295 } 00:32:25.295 ] 00:32:25.295 }, 00:32:25.295 { 00:32:25.295 "subsystem": "sock", 00:32:25.295 "config": [ 00:32:25.295 { 00:32:25.295 "method": "sock_set_default_impl", 00:32:25.295 "params": { 00:32:25.295 "impl_name": "posix" 00:32:25.295 } 00:32:25.295 }, 00:32:25.295 { 00:32:25.295 "method": "sock_impl_set_options", 00:32:25.295 "params": { 00:32:25.295 "impl_name": "ssl", 00:32:25.295 "recv_buf_size": 4096, 00:32:25.295 "send_buf_size": 4096, 00:32:25.295 "enable_recv_pipe": true, 00:32:25.295 "enable_quickack": false, 00:32:25.295 "enable_placement_id": 0, 00:32:25.295 "enable_zerocopy_send_server": true, 00:32:25.295 "enable_zerocopy_send_client": false, 00:32:25.295 "zerocopy_threshold": 0, 00:32:25.295 "tls_version": 0, 00:32:25.295 "enable_ktls": false 00:32:25.295 } 00:32:25.295 }, 00:32:25.295 { 00:32:25.295 "method": "sock_impl_set_options", 00:32:25.295 "params": { 00:32:25.295 "impl_name": "posix", 00:32:25.295 "recv_buf_size": 2097152, 00:32:25.295 "send_buf_size": 2097152, 00:32:25.295 "enable_recv_pipe": true, 00:32:25.295 "enable_quickack": false, 00:32:25.295 "enable_placement_id": 0, 00:32:25.295 "enable_zerocopy_send_server": true, 00:32:25.295 "enable_zerocopy_send_client": false, 00:32:25.295 "zerocopy_threshold": 0, 00:32:25.295 "tls_version": 0, 00:32:25.295 "enable_ktls": false 00:32:25.295 } 00:32:25.295 } 00:32:25.295 ] 00:32:25.295 }, 00:32:25.295 { 00:32:25.295 "subsystem": "vmd", 00:32:25.295 "config": [] 00:32:25.295 }, 00:32:25.296 { 00:32:25.296 "subsystem": "accel", 00:32:25.296 "config": [ 00:32:25.296 { 00:32:25.296 "method": "accel_set_options", 00:32:25.296 "params": { 00:32:25.296 "small_cache_size": 128, 00:32:25.296 "large_cache_size": 16, 00:32:25.296 "task_count": 2048, 00:32:25.296 "sequence_count": 2048, 00:32:25.296 "buf_count": 2048 00:32:25.296 } 00:32:25.296 } 00:32:25.296 ] 00:32:25.296 }, 00:32:25.296 { 00:32:25.296 "subsystem": "bdev", 00:32:25.296 "config": [ 00:32:25.296 { 00:32:25.296 "method": "bdev_set_options", 00:32:25.296 "params": { 00:32:25.296 "bdev_io_pool_size": 65535, 00:32:25.296 "bdev_io_cache_size": 256, 00:32:25.296 "bdev_auto_examine": true, 00:32:25.296 "iobuf_small_cache_size": 128, 00:32:25.296 "iobuf_large_cache_size": 16 00:32:25.296 } 00:32:25.296 }, 00:32:25.296 { 00:32:25.296 "method": "bdev_raid_set_options", 00:32:25.296 "params": { 00:32:25.296 "process_window_size_kb": 1024 00:32:25.296 } 00:32:25.296 }, 00:32:25.296 { 00:32:25.296 "method": 
"bdev_iscsi_set_options", 00:32:25.296 "params": { 00:32:25.296 "timeout_sec": 30 00:32:25.296 } 00:32:25.296 }, 00:32:25.296 { 00:32:25.296 "method": "bdev_nvme_set_options", 00:32:25.296 "params": { 00:32:25.296 "action_on_timeout": "none", 00:32:25.296 "timeout_us": 0, 00:32:25.296 "timeout_admin_us": 0, 00:32:25.296 "keep_alive_timeout_ms": 10000, 00:32:25.296 "arbitration_burst": 0, 00:32:25.296 "low_priority_weight": 0, 00:32:25.296 "medium_priority_weight": 0, 00:32:25.296 "high_priority_weight": 0, 00:32:25.296 "nvme_adminq_poll_period_us": 10000, 00:32:25.296 "nvme_ioq_poll_period_us": 0, 00:32:25.296 "io_queue_requests": 512, 00:32:25.296 "delay_cmd_submit": true, 00:32:25.296 "transport_retry_count": 4, 00:32:25.296 "bdev_retry_count": 3, 00:32:25.296 "transport_ack_timeout": 0, 00:32:25.296 "ctrlr_loss_timeout_sec": 0, 00:32:25.296 "reconnect_delay_sec": 0, 00:32:25.296 "fast_io_fail_timeout_sec": 0, 00:32:25.296 "disable_auto_failback": false, 00:32:25.296 "generate_uuids": false, 00:32:25.296 "transport_tos": 0, 00:32:25.296 "nvme_error_stat": false, 00:32:25.296 "rdma_srq_size": 0, 00:32:25.296 "io_path_stat": false, 00:32:25.296 "allow_accel_sequence": false, 00:32:25.296 "rdma_max_cq_size": 0, 00:32:25.296 "rdma_cm_event_timeout_ms": 0, 00:32:25.296 "dhchap_digests": [ 00:32:25.296 "sha256", 00:32:25.296 "sha384", 00:32:25.296 "sha512" 00:32:25.296 ], 00:32:25.296 "dhchap_dhgroups": [ 00:32:25.296 "null", 00:32:25.296 "ffdhe2048", 00:32:25.296 "ffdhe3072", 00:32:25.296 "ffdhe4096", 00:32:25.296 "ffdhe6144", 00:32:25.296 "ffdhe8192" 00:32:25.296 ] 00:32:25.296 } 00:32:25.296 }, 00:32:25.296 { 00:32:25.296 "method": "bdev_nvme_attach_controller", 00:32:25.296 "params": { 00:32:25.296 "name": "nvme0", 00:32:25.296 "trtype": "TCP", 00:32:25.296 "adrfam": "IPv4", 00:32:25.296 "traddr": "127.0.0.1", 00:32:25.296 "trsvcid": "4420", 00:32:25.296 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:25.296 "prchk_reftag": false, 00:32:25.296 "prchk_guard": false, 00:32:25.296 "ctrlr_loss_timeout_sec": 0, 00:32:25.296 "reconnect_delay_sec": 0, 00:32:25.296 "fast_io_fail_timeout_sec": 0, 00:32:25.296 "psk": "key0", 00:32:25.296 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:25.296 "hdgst": false, 00:32:25.296 "ddgst": false 00:32:25.296 } 00:32:25.296 }, 00:32:25.296 { 00:32:25.296 "method": "bdev_nvme_set_hotplug", 00:32:25.296 "params": { 00:32:25.296 "period_us": 100000, 00:32:25.296 "enable": false 00:32:25.296 } 00:32:25.296 }, 00:32:25.296 { 00:32:25.296 "method": "bdev_wait_for_examine" 00:32:25.296 } 00:32:25.296 ] 00:32:25.296 }, 00:32:25.296 { 00:32:25.296 "subsystem": "nbd", 00:32:25.296 "config": [] 00:32:25.296 } 00:32:25.296 ] 00:32:25.296 }' 00:32:25.296 16:24:01 keyring_file -- keyring/file.sh@114 -- # killprocess 2523213 00:32:25.296 16:24:01 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2523213 ']' 00:32:25.296 16:24:01 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2523213 00:32:25.296 16:24:01 keyring_file -- common/autotest_common.sh@953 -- # uname 00:32:25.558 16:24:01 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:25.558 16:24:01 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2523213 00:32:25.558 16:24:01 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:25.558 16:24:01 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:25.558 16:24:01 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2523213' 
00:32:25.558 killing process with pid 2523213 00:32:25.558 16:24:01 keyring_file -- common/autotest_common.sh@967 -- # kill 2523213 00:32:25.558 Received shutdown signal, test time was about 1.000000 seconds 00:32:25.558 00:32:25.558 Latency(us) 00:32:25.558 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:25.558 =================================================================================================================== 00:32:25.558 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:25.558 16:24:01 keyring_file -- common/autotest_common.sh@972 -- # wait 2523213 00:32:25.558 16:24:01 keyring_file -- keyring/file.sh@117 -- # bperfpid=2524886 00:32:25.558 16:24:01 keyring_file -- keyring/file.sh@119 -- # waitforlisten 2524886 /var/tmp/bperf.sock 00:32:25.558 16:24:01 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2524886 ']' 00:32:25.558 16:24:01 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:25.558 16:24:01 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:25.558 16:24:01 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:32:25.558 16:24:01 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:25.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:25.558 16:24:01 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:25.558 16:24:01 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:25.558 16:24:01 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:32:25.558 "subsystems": [ 00:32:25.558 { 00:32:25.558 "subsystem": "keyring", 00:32:25.558 "config": [ 00:32:25.558 { 00:32:25.558 "method": "keyring_file_add_key", 00:32:25.558 "params": { 00:32:25.558 "name": "key0", 00:32:25.558 "path": "/tmp/tmp.nDfQbegYde" 00:32:25.558 } 00:32:25.558 }, 00:32:25.558 { 00:32:25.558 "method": "keyring_file_add_key", 00:32:25.558 "params": { 00:32:25.558 "name": "key1", 00:32:25.558 "path": "/tmp/tmp.wTesNlfxQm" 00:32:25.558 } 00:32:25.558 } 00:32:25.559 ] 00:32:25.559 }, 00:32:25.559 { 00:32:25.559 "subsystem": "iobuf", 00:32:25.559 "config": [ 00:32:25.559 { 00:32:25.559 "method": "iobuf_set_options", 00:32:25.559 "params": { 00:32:25.559 "small_pool_count": 8192, 00:32:25.559 "large_pool_count": 1024, 00:32:25.559 "small_bufsize": 8192, 00:32:25.559 "large_bufsize": 135168 00:32:25.559 } 00:32:25.559 } 00:32:25.559 ] 00:32:25.559 }, 00:32:25.559 { 00:32:25.559 "subsystem": "sock", 00:32:25.559 "config": [ 00:32:25.559 { 00:32:25.559 "method": "sock_set_default_impl", 00:32:25.559 "params": { 00:32:25.559 "impl_name": "posix" 00:32:25.559 } 00:32:25.559 }, 00:32:25.559 { 00:32:25.559 "method": "sock_impl_set_options", 00:32:25.559 "params": { 00:32:25.559 "impl_name": "ssl", 00:32:25.559 "recv_buf_size": 4096, 00:32:25.559 "send_buf_size": 4096, 00:32:25.559 "enable_recv_pipe": true, 00:32:25.559 "enable_quickack": false, 00:32:25.559 "enable_placement_id": 0, 00:32:25.559 "enable_zerocopy_send_server": true, 00:32:25.559 "enable_zerocopy_send_client": false, 00:32:25.559 "zerocopy_threshold": 0, 00:32:25.559 "tls_version": 0, 00:32:25.559 "enable_ktls": false 00:32:25.559 } 00:32:25.559 }, 00:32:25.559 { 00:32:25.559 "method": "sock_impl_set_options", 00:32:25.559 "params": { 
00:32:25.559 "impl_name": "posix", 00:32:25.559 "recv_buf_size": 2097152, 00:32:25.559 "send_buf_size": 2097152, 00:32:25.559 "enable_recv_pipe": true, 00:32:25.559 "enable_quickack": false, 00:32:25.559 "enable_placement_id": 0, 00:32:25.559 "enable_zerocopy_send_server": true, 00:32:25.559 "enable_zerocopy_send_client": false, 00:32:25.559 "zerocopy_threshold": 0, 00:32:25.559 "tls_version": 0, 00:32:25.559 "enable_ktls": false 00:32:25.559 } 00:32:25.559 } 00:32:25.559 ] 00:32:25.559 }, 00:32:25.559 { 00:32:25.559 "subsystem": "vmd", 00:32:25.559 "config": [] 00:32:25.559 }, 00:32:25.559 { 00:32:25.559 "subsystem": "accel", 00:32:25.559 "config": [ 00:32:25.559 { 00:32:25.559 "method": "accel_set_options", 00:32:25.559 "params": { 00:32:25.559 "small_cache_size": 128, 00:32:25.559 "large_cache_size": 16, 00:32:25.559 "task_count": 2048, 00:32:25.559 "sequence_count": 2048, 00:32:25.559 "buf_count": 2048 00:32:25.559 } 00:32:25.559 } 00:32:25.559 ] 00:32:25.559 }, 00:32:25.559 { 00:32:25.559 "subsystem": "bdev", 00:32:25.559 "config": [ 00:32:25.559 { 00:32:25.559 "method": "bdev_set_options", 00:32:25.559 "params": { 00:32:25.559 "bdev_io_pool_size": 65535, 00:32:25.559 "bdev_io_cache_size": 256, 00:32:25.559 "bdev_auto_examine": true, 00:32:25.559 "iobuf_small_cache_size": 128, 00:32:25.559 "iobuf_large_cache_size": 16 00:32:25.559 } 00:32:25.559 }, 00:32:25.559 { 00:32:25.559 "method": "bdev_raid_set_options", 00:32:25.559 "params": { 00:32:25.559 "process_window_size_kb": 1024 00:32:25.559 } 00:32:25.559 }, 00:32:25.559 { 00:32:25.559 "method": "bdev_iscsi_set_options", 00:32:25.559 "params": { 00:32:25.559 "timeout_sec": 30 00:32:25.559 } 00:32:25.559 }, 00:32:25.559 { 00:32:25.559 "method": "bdev_nvme_set_options", 00:32:25.559 "params": { 00:32:25.559 "action_on_timeout": "none", 00:32:25.559 "timeout_us": 0, 00:32:25.559 "timeout_admin_us": 0, 00:32:25.559 "keep_alive_timeout_ms": 10000, 00:32:25.559 "arbitration_burst": 0, 00:32:25.559 "low_priority_weight": 0, 00:32:25.559 "medium_priority_weight": 0, 00:32:25.559 "high_priority_weight": 0, 00:32:25.559 "nvme_adminq_poll_period_us": 10000, 00:32:25.559 "nvme_ioq_poll_period_us": 0, 00:32:25.559 "io_queue_requests": 512, 00:32:25.559 "delay_cmd_submit": true, 00:32:25.559 "transport_retry_count": 4, 00:32:25.559 "bdev_retry_count": 3, 00:32:25.559 "transport_ack_timeout": 0, 00:32:25.559 "ctrlr_loss_timeout_sec": 0, 00:32:25.559 "reconnect_delay_sec": 0, 00:32:25.559 "fast_io_fail_timeout_sec": 0, 00:32:25.559 "disable_auto_failback": false, 00:32:25.559 "generate_uuids": false, 00:32:25.559 "transport_tos": 0, 00:32:25.559 "nvme_error_stat": false, 00:32:25.559 "rdma_srq_size": 0, 00:32:25.559 "io_path_stat": false, 00:32:25.559 "allow_accel_sequence": false, 00:32:25.559 "rdma_max_cq_size": 0, 00:32:25.559 "rdma_cm_event_timeout_ms": 0, 00:32:25.559 "dhchap_digests": [ 00:32:25.559 "sha256", 00:32:25.559 "sha384", 00:32:25.559 "sha512" 00:32:25.559 ], 00:32:25.559 "dhchap_dhgroups": [ 00:32:25.559 "null", 00:32:25.559 "ffdhe2048", 00:32:25.559 "ffdhe3072", 00:32:25.559 "ffdhe4096", 00:32:25.559 "ffdhe6144", 00:32:25.559 "ffdhe8192" 00:32:25.559 ] 00:32:25.559 } 00:32:25.559 }, 00:32:25.559 { 00:32:25.559 "method": "bdev_nvme_attach_controller", 00:32:25.559 "params": { 00:32:25.559 "name": "nvme0", 00:32:25.559 "trtype": "TCP", 00:32:25.559 "adrfam": "IPv4", 00:32:25.559 "traddr": "127.0.0.1", 00:32:25.559 "trsvcid": "4420", 00:32:25.559 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:25.559 "prchk_reftag": false, 00:32:25.559 
"prchk_guard": false, 00:32:25.559 "ctrlr_loss_timeout_sec": 0, 00:32:25.559 "reconnect_delay_sec": 0, 00:32:25.559 "fast_io_fail_timeout_sec": 0, 00:32:25.559 "psk": "key0", 00:32:25.559 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:25.559 "hdgst": false, 00:32:25.559 "ddgst": false 00:32:25.559 } 00:32:25.559 }, 00:32:25.559 { 00:32:25.559 "method": "bdev_nvme_set_hotplug", 00:32:25.559 "params": { 00:32:25.559 "period_us": 100000, 00:32:25.559 "enable": false 00:32:25.559 } 00:32:25.559 }, 00:32:25.559 { 00:32:25.559 "method": "bdev_wait_for_examine" 00:32:25.559 } 00:32:25.559 ] 00:32:25.559 }, 00:32:25.559 { 00:32:25.559 "subsystem": "nbd", 00:32:25.559 "config": [] 00:32:25.559 } 00:32:25.559 ] 00:32:25.559 }' 00:32:25.560 [2024-07-15 16:24:01.351328] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:32:25.560 [2024-07-15 16:24:01.351384] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2524886 ] 00:32:25.560 EAL: No free 2048 kB hugepages reported on node 1 00:32:25.821 [2024-07-15 16:24:01.424711] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:25.821 [2024-07-15 16:24:01.477852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:25.821 [2024-07-15 16:24:01.619278] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:26.393 16:24:02 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:26.393 16:24:02 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:32:26.393 16:24:02 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:32:26.393 16:24:02 keyring_file -- keyring/file.sh@120 -- # jq length 00:32:26.393 16:24:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:26.653 16:24:02 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:32:26.653 16:24:02 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:32:26.653 16:24:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:26.653 16:24:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:26.653 16:24:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:26.653 16:24:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:26.653 16:24:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:26.653 16:24:02 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:32:26.653 16:24:02 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:32:26.653 16:24:02 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:26.653 16:24:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:26.653 16:24:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:26.653 16:24:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:26.653 16:24:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:26.913 16:24:02 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:32:26.913 16:24:02 keyring_file -- keyring/file.sh@123 -- # bperf_cmd 
bdev_nvme_get_controllers 00:32:26.913 16:24:02 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:32:26.913 16:24:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:32:26.913 16:24:02 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:32:26.913 16:24:02 keyring_file -- keyring/file.sh@1 -- # cleanup 00:32:26.913 16:24:02 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.nDfQbegYde /tmp/tmp.wTesNlfxQm 00:32:27.174 16:24:02 keyring_file -- keyring/file.sh@20 -- # killprocess 2524886 00:32:27.174 16:24:02 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2524886 ']' 00:32:27.174 16:24:02 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2524886 00:32:27.174 16:24:02 keyring_file -- common/autotest_common.sh@953 -- # uname 00:32:27.174 16:24:02 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:27.174 16:24:02 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2524886 00:32:27.174 16:24:02 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:27.174 16:24:02 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:27.174 16:24:02 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2524886' 00:32:27.174 killing process with pid 2524886 00:32:27.174 16:24:02 keyring_file -- common/autotest_common.sh@967 -- # kill 2524886 00:32:27.174 Received shutdown signal, test time was about 1.000000 seconds 00:32:27.174 00:32:27.174 Latency(us) 00:32:27.174 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:27.174 =================================================================================================================== 00:32:27.174 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:27.174 16:24:02 keyring_file -- common/autotest_common.sh@972 -- # wait 2524886 00:32:27.174 16:24:02 keyring_file -- keyring/file.sh@21 -- # killprocess 2523056 00:32:27.174 16:24:02 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2523056 ']' 00:32:27.174 16:24:02 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2523056 00:32:27.174 16:24:02 keyring_file -- common/autotest_common.sh@953 -- # uname 00:32:27.174 16:24:02 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:27.174 16:24:02 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2523056 00:32:27.174 16:24:02 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:27.174 16:24:02 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:27.174 16:24:02 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2523056' 00:32:27.174 killing process with pid 2523056 00:32:27.174 16:24:02 keyring_file -- common/autotest_common.sh@967 -- # kill 2523056 00:32:27.174 [2024-07-15 16:24:02.978441] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:32:27.174 16:24:02 keyring_file -- common/autotest_common.sh@972 -- # wait 2523056 00:32:27.434 00:32:27.434 real 0m10.995s 00:32:27.434 user 0m25.665s 00:32:27.434 sys 0m2.593s 00:32:27.434 16:24:03 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:27.434 16:24:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:27.434 
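[editor note] For reference, the second bdevperf instance in this test (pid 2524886) was not configured over RPC at all: the keyring and bdev_nvme state captured by save_config on the first instance was replayed at startup through -c /dev/fd/63. A hedged sketch of that save/restore pattern, with the bdevperf flags copied from the trace and process substitution standing in for the /dev/fd/63 seen above:
    config=$(scripts/rpc.py -s /var/tmp/bperf.sock save_config)
    build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
        -r /var/tmp/bperf.sock -z -c <(echo "$config")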
************************************ 00:32:27.434 END TEST keyring_file 00:32:27.434 ************************************ 00:32:27.434 16:24:03 -- common/autotest_common.sh@1142 -- # return 0 00:32:27.434 16:24:03 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:32:27.434 16:24:03 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:27.434 16:24:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:27.434 16:24:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:27.435 16:24:03 -- common/autotest_common.sh@10 -- # set +x 00:32:27.435 ************************************ 00:32:27.435 START TEST keyring_linux 00:32:27.435 ************************************ 00:32:27.435 16:24:03 keyring_linux -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:27.697 * Looking for test storage... 00:32:27.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:27.697 16:24:03 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:27.697 16:24:03 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:27.697 16:24:03 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:32:27.697 16:24:03 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:27.697 16:24:03 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:27.697 16:24:03 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:27.697 16:24:03 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:27.697 16:24:03 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:27.697 16:24:03 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:27.697 16:24:03 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:27.697 16:24:03 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:27.697 16:24:03 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:27.697 16:24:03 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:27.697 16:24:03 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:27.697 16:24:03 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:27.697 16:24:03 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:27.697 16:24:03 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:27.697 16:24:03 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:27.697 16:24:03 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:27.697 16:24:03 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:27.697 16:24:03 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:27.697 16:24:03 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:27.697 16:24:03 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:27.697 16:24:03 keyring_linux -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.697 16:24:03 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.697 16:24:03 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.697 16:24:03 keyring_linux -- paths/export.sh@5 -- # export PATH 00:32:27.697 16:24:03 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.697 16:24:03 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:32:27.697 16:24:03 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:27.697 16:24:03 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:27.697 16:24:03 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:27.697 16:24:03 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:27.697 16:24:03 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:27.697 16:24:03 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:27.697 16:24:03 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:27.697 16:24:03 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:27.697 16:24:03 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:27.697 16:24:03 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:27.697 16:24:03 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:27.697 16:24:03 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:32:27.697 16:24:03 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:32:27.697 16:24:03 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:32:27.697 16:24:03 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:32:27.697 16:24:03 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:27.697 16:24:03 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:32:27.697 16:24:03 keyring_linux -- keyring/common.sh@17 -- # 
key=00112233445566778899aabbccddeeff 00:32:27.697 16:24:03 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:27.697 16:24:03 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:32:27.697 16:24:03 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:27.697 16:24:03 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:27.697 16:24:03 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:27.697 16:24:03 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:27.697 16:24:03 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:27.697 16:24:03 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:27.697 16:24:03 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:27.697 16:24:03 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:32:27.697 16:24:03 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:32:27.697 /tmp/:spdk-test:key0 00:32:27.697 16:24:03 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:32:27.697 16:24:03 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:27.697 16:24:03 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:32:27.697 16:24:03 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:27.697 16:24:03 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:27.697 16:24:03 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:32:27.697 16:24:03 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:27.697 16:24:03 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:27.697 16:24:03 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:27.697 16:24:03 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:27.697 16:24:03 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:27.697 16:24:03 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:27.697 16:24:03 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:27.697 16:24:03 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:32:27.697 16:24:03 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:32:27.697 /tmp/:spdk-test:key1 00:32:27.697 16:24:03 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2525539 00:32:27.697 16:24:03 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2525539 00:32:27.697 16:24:03 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:27.697 16:24:03 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 2525539 ']' 00:32:27.697 16:24:03 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:27.697 16:24:03 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:27.697 16:24:03 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:27.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
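
For readers following along, the keyring_linux setup that the trace above and below performs reduces to the short sketch here. It only restates commands already visible in this log; SPDK_DIR and BPERF_SOCK are illustrative shorthand, running bdevperf in the background with & is a convenience (the harness uses its own waitforlisten helpers instead), and it assumes an NVMe-oF TCP target with TLS enabled is already listening on 127.0.0.1:4420 as set up earlier in this run.

    # Minimal sketch of the keyring_linux flow exercised by this test.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # shorthand for the checkout used in this run
    BPERF_SOCK=/var/tmp/bperf.sock

    # Put the interchange-format PSK into the kernel session keyring as a
    # "user" key; keyctl prints the serial that keyring_get_keys reports as .sn.
    keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s

    # Start bdevperf paused (same arguments as the trace), enable the Linux
    # keyring backend over its RPC socket, then finish subsystem init.
    "$SPDK_DIR"/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r "$BPERF_SOCK" -z --wait-for-rpc &
    "$SPDK_DIR"/scripts/rpc.py -s "$BPERF_SOCK" keyring_linux_set_options --enable
    "$SPDK_DIR"/scripts/rpc.py -s "$BPERF_SOCK" framework_start_init

    # Attach the controller, referencing the kernel key by name as the PSK.
    "$SPDK_DIR"/scripts/rpc.py -s "$BPERF_SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
        -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0

Tear-down, as the later trace shows, is the mirror image: bdev_nvme_detach_controller nvme0 over the same socket, then keyctl search @s user :spdk-test:key0 and keyctl unlink of the returned serial.
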
00:32:27.697 16:24:03 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:27.697 16:24:03 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:27.958 [2024-07-15 16:24:03.556232] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 00:32:27.958 [2024-07-15 16:24:03.556312] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2525539 ] 00:32:27.958 EAL: No free 2048 kB hugepages reported on node 1 00:32:27.958 [2024-07-15 16:24:03.616618] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:27.958 [2024-07-15 16:24:03.681325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:28.529 16:24:04 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:28.529 16:24:04 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:32:28.529 16:24:04 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:32:28.529 16:24:04 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.529 16:24:04 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:28.529 [2024-07-15 16:24:04.334841] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:28.529 null0 00:32:28.529 [2024-07-15 16:24:04.366883] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:28.529 [2024-07-15 16:24:04.367282] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:28.790 16:24:04 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.790 16:24:04 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:32:28.790 622229863 00:32:28.790 16:24:04 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:32:28.790 525212004 00:32:28.790 16:24:04 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2525569 00:32:28.790 16:24:04 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2525569 /var/tmp/bperf.sock 00:32:28.790 16:24:04 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:32:28.790 16:24:04 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 2525569 ']' 00:32:28.790 16:24:04 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:28.790 16:24:04 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:28.790 16:24:04 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:28.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:28.790 16:24:04 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:28.790 16:24:04 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:28.790 [2024-07-15 16:24:04.442980] Starting SPDK v24.09-pre git sha1 97f71d59d / DPDK 24.03.0 initialization... 
00:32:28.790 [2024-07-15 16:24:04.443027] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2525569 ] 00:32:28.790 EAL: No free 2048 kB hugepages reported on node 1 00:32:28.790 [2024-07-15 16:24:04.518171] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:28.790 [2024-07-15 16:24:04.572365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:29.362 16:24:05 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:29.362 16:24:05 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:32:29.362 16:24:05 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:32:29.362 16:24:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:32:29.622 16:24:05 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:32:29.622 16:24:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:29.882 16:24:05 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:29.882 16:24:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:29.882 [2024-07-15 16:24:05.687115] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:30.144 nvme0n1 00:32:30.144 16:24:05 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:32:30.144 16:24:05 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:32:30.144 16:24:05 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:30.144 16:24:05 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:30.144 16:24:05 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:30.144 16:24:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:30.144 16:24:05 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:32:30.144 16:24:05 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:30.144 16:24:05 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:32:30.144 16:24:05 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:32:30.144 16:24:05 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:30.144 16:24:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:30.144 16:24:05 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:32:30.404 16:24:06 keyring_linux -- keyring/linux.sh@25 -- # sn=622229863 00:32:30.404 16:24:06 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:32:30.404 16:24:06 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:32:30.404 16:24:06 keyring_linux -- keyring/linux.sh@26 -- # [[ 622229863 == \6\2\2\2\2\9\8\6\3 ]] 00:32:30.404 16:24:06 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 622229863 00:32:30.404 16:24:06 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:32:30.404 16:24:06 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:30.404 Running I/O for 1 seconds... 00:32:31.788 00:32:31.788 Latency(us) 00:32:31.788 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:31.788 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:31.788 nvme0n1 : 1.01 10280.11 40.16 0.00 0.00 12359.83 10594.99 23483.73 00:32:31.788 =================================================================================================================== 00:32:31.788 Total : 10280.11 40.16 0.00 0.00 12359.83 10594.99 23483.73 00:32:31.788 0 00:32:31.788 16:24:07 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:31.788 16:24:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:31.788 16:24:07 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:32:31.788 16:24:07 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:32:31.788 16:24:07 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:31.788 16:24:07 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:31.788 16:24:07 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:31.788 16:24:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:31.788 16:24:07 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:32:31.788 16:24:07 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:31.788 16:24:07 keyring_linux -- keyring/linux.sh@23 -- # return 00:32:31.788 16:24:07 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:31.788 16:24:07 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:32:31.788 16:24:07 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:31.788 16:24:07 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:31.788 16:24:07 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:31.788 16:24:07 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:31.788 16:24:07 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:31.788 16:24:07 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:31.788 16:24:07 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:32.048 [2024-07-15 16:24:07.693240] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:32.048 [2024-07-15 16:24:07.694097] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a5950 (107): Transport endpoint is not connected 00:32:32.048 [2024-07-15 16:24:07.695093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a5950 (9): Bad file descriptor 00:32:32.048 [2024-07-15 16:24:07.696094] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:32.048 [2024-07-15 16:24:07.696101] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:32.048 [2024-07-15 16:24:07.696106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:32.048 request: 00:32:32.048 { 00:32:32.048 "name": "nvme0", 00:32:32.048 "trtype": "tcp", 00:32:32.048 "traddr": "127.0.0.1", 00:32:32.048 "adrfam": "ipv4", 00:32:32.048 "trsvcid": "4420", 00:32:32.048 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:32.048 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:32.048 "prchk_reftag": false, 00:32:32.048 "prchk_guard": false, 00:32:32.048 "hdgst": false, 00:32:32.048 "ddgst": false, 00:32:32.048 "psk": ":spdk-test:key1", 00:32:32.048 "method": "bdev_nvme_attach_controller", 00:32:32.048 "req_id": 1 00:32:32.048 } 00:32:32.048 Got JSON-RPC error response 00:32:32.048 response: 00:32:32.048 { 00:32:32.048 "code": -5, 00:32:32.048 "message": "Input/output error" 00:32:32.048 } 00:32:32.048 16:24:07 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:32:32.048 16:24:07 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:32.048 16:24:07 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:32.048 16:24:07 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:32.048 16:24:07 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:32:32.048 16:24:07 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:32.048 16:24:07 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:32:32.048 16:24:07 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:32:32.049 16:24:07 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:32:32.049 16:24:07 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:32:32.049 16:24:07 keyring_linux -- keyring/linux.sh@33 -- # sn=622229863 00:32:32.049 16:24:07 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 622229863 00:32:32.049 1 links removed 00:32:32.049 16:24:07 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:32.049 16:24:07 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:32:32.049 16:24:07 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:32:32.049 16:24:07 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:32:32.049 16:24:07 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:32:32.049 16:24:07 keyring_linux -- keyring/linux.sh@33 -- # sn=525212004 00:32:32.049 16:24:07 
keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 525212004 00:32:32.049 1 links removed 00:32:32.049 16:24:07 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2525569 00:32:32.049 16:24:07 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 2525569 ']' 00:32:32.049 16:24:07 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 2525569 00:32:32.049 16:24:07 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:32:32.049 16:24:07 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:32.049 16:24:07 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2525569 00:32:32.049 16:24:07 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:32.049 16:24:07 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:32.049 16:24:07 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2525569' 00:32:32.049 killing process with pid 2525569 00:32:32.049 16:24:07 keyring_linux -- common/autotest_common.sh@967 -- # kill 2525569 00:32:32.049 Received shutdown signal, test time was about 1.000000 seconds 00:32:32.049 00:32:32.049 Latency(us) 00:32:32.049 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:32.049 =================================================================================================================== 00:32:32.049 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:32.049 16:24:07 keyring_linux -- common/autotest_common.sh@972 -- # wait 2525569 00:32:32.049 16:24:07 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2525539 00:32:32.049 16:24:07 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 2525539 ']' 00:32:32.049 16:24:07 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 2525539 00:32:32.049 16:24:07 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:32:32.308 16:24:07 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:32.308 16:24:07 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2525539 00:32:32.308 16:24:07 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:32.308 16:24:07 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:32.308 16:24:07 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2525539' 00:32:32.308 killing process with pid 2525539 00:32:32.308 16:24:07 keyring_linux -- common/autotest_common.sh@967 -- # kill 2525539 00:32:32.308 16:24:07 keyring_linux -- common/autotest_common.sh@972 -- # wait 2525539 00:32:32.569 00:32:32.569 real 0m4.886s 00:32:32.569 user 0m8.302s 00:32:32.569 sys 0m1.333s 00:32:32.569 16:24:08 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:32.569 16:24:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:32.569 ************************************ 00:32:32.569 END TEST keyring_linux 00:32:32.569 ************************************ 00:32:32.569 16:24:08 -- common/autotest_common.sh@1142 -- # return 0 00:32:32.569 16:24:08 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:32:32.569 16:24:08 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:32:32.569 16:24:08 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:32:32.569 16:24:08 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:32:32.569 16:24:08 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:32:32.569 16:24:08 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:32:32.569 16:24:08 -- spdk/autotest.sh@339 
-- # '[' 0 -eq 1 ']' 00:32:32.569 16:24:08 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:32:32.569 16:24:08 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:32:32.569 16:24:08 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:32:32.569 16:24:08 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:32:32.569 16:24:08 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:32:32.569 16:24:08 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:32:32.569 16:24:08 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:32:32.569 16:24:08 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:32:32.569 16:24:08 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:32:32.569 16:24:08 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:32:32.569 16:24:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:32.569 16:24:08 -- common/autotest_common.sh@10 -- # set +x 00:32:32.569 16:24:08 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:32:32.569 16:24:08 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:32:32.569 16:24:08 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:32:32.569 16:24:08 -- common/autotest_common.sh@10 -- # set +x 00:32:40.708 INFO: APP EXITING 00:32:40.708 INFO: killing all VMs 00:32:40.708 INFO: killing vhost app 00:32:40.708 WARN: no vhost pid file found 00:32:40.708 INFO: EXIT DONE 00:32:43.291 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:32:43.553 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:32:43.553 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:32:43.553 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:32:43.553 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:32:43.553 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:32:43.553 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:32:43.553 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:32:43.553 0000:65:00.0 (144d a80a): Already using the nvme driver 00:32:43.553 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:32:43.553 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:32:43.553 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:32:43.812 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:32:43.812 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:32:43.812 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:32:43.812 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:32:43.812 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:32:48.022 Cleaning 00:32:48.022 Removing: /var/run/dpdk/spdk0/config 00:32:48.022 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:48.022 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:48.022 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:48.022 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:48.022 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:32:48.022 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:32:48.022 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:32:48.022 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:32:48.022 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:48.023 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:48.023 Removing: /var/run/dpdk/spdk1/config 00:32:48.023 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:32:48.023 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:32:48.023 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:32:48.023 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:32:48.023 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:32:48.023 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:32:48.023 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:32:48.023 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:32:48.023 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:32:48.023 Removing: /var/run/dpdk/spdk1/hugepage_info 00:32:48.023 Removing: /var/run/dpdk/spdk1/mp_socket 00:32:48.023 Removing: /var/run/dpdk/spdk2/config 00:32:48.023 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:32:48.023 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:32:48.023 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:32:48.023 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:32:48.023 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:32:48.023 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:32:48.023 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:32:48.023 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:32:48.023 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:32:48.023 Removing: /var/run/dpdk/spdk2/hugepage_info 00:32:48.023 Removing: /var/run/dpdk/spdk3/config 00:32:48.023 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:32:48.023 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:32:48.023 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:32:48.023 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:32:48.023 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:32:48.023 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:32:48.023 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:32:48.023 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:32:48.023 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:32:48.023 Removing: /var/run/dpdk/spdk3/hugepage_info 00:32:48.023 Removing: /var/run/dpdk/spdk4/config 00:32:48.023 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:32:48.023 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:32:48.023 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:32:48.023 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:32:48.023 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:32:48.023 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:32:48.023 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:32:48.023 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:32:48.023 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:32:48.023 Removing: /var/run/dpdk/spdk4/hugepage_info 00:32:48.023 Removing: /dev/shm/bdev_svc_trace.1 00:32:48.023 Removing: /dev/shm/nvmf_trace.0 00:32:48.023 Removing: /dev/shm/spdk_tgt_trace.pid2067361 00:32:48.023 Removing: /var/run/dpdk/spdk0 00:32:48.023 Removing: /var/run/dpdk/spdk1 00:32:48.023 Removing: /var/run/dpdk/spdk2 00:32:48.023 Removing: /var/run/dpdk/spdk3 00:32:48.023 Removing: /var/run/dpdk/spdk4 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2065769 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2067361 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2067900 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2069095 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2069297 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2070662 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2070765 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2071155 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2072550 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2073259 00:32:48.023 Removing: 
/var/run/dpdk/spdk_pid2073646 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2073957 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2074239 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2074512 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2074864 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2075216 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2075520 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2076481 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2079907 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2080280 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2080640 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2080653 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2081201 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2081365 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2081740 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2082072 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2082352 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2082448 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2082805 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2082826 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2083305 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2083611 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2084001 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2084375 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2084396 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2084510 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2084819 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2085166 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2085521 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2085815 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2086004 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2086260 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2086609 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2086959 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2087312 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2087511 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2087712 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2088050 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2088401 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2088749 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2089026 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2089223 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2089493 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2089849 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2090201 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2090549 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2090618 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2091025 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2095476 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2148939 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2153977 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2165995 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2172373 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2177720 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2178627 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2185815 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2193020 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2193022 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2194033 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2195052 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2196133 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2196765 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2196919 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2197129 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2197393 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2197396 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2198402 00:32:48.023 Removing: 
/var/run/dpdk/spdk_pid2199405 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2200413 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2201301 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2201416 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2201685 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2202918 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2204261 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2214536 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2214938 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2219980 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2227299 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2230362 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2242506 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2253185 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2255338 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2256520 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2276870 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2281746 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2313183 00:32:48.023 Removing: /var/run/dpdk/spdk_pid2318563 00:32:48.024 Removing: /var/run/dpdk/spdk_pid2320669 00:32:48.024 Removing: /var/run/dpdk/spdk_pid2323165 00:32:48.024 Removing: /var/run/dpdk/spdk_pid2323493 00:32:48.024 Removing: /var/run/dpdk/spdk_pid2323789 00:32:48.024 Removing: /var/run/dpdk/spdk_pid2323869 00:32:48.024 Removing: /var/run/dpdk/spdk_pid2324568 00:32:48.024 Removing: /var/run/dpdk/spdk_pid2326682 00:32:48.024 Removing: /var/run/dpdk/spdk_pid2327705 00:32:48.024 Removing: /var/run/dpdk/spdk_pid2328279 00:32:48.024 Removing: /var/run/dpdk/spdk_pid2330744 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2331448 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2332356 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2337214 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2349132 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2353944 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2361250 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2362973 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2364498 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2369689 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2375170 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2383908 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2383911 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2388796 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2389008 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2389294 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2389913 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2389963 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2395331 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2396007 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2401320 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2404506 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2411048 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2417308 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2427743 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2436219 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2436262 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2458849 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2459534 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2460215 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2460904 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2461962 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2462649 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2463352 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2464103 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2469109 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2469406 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2476786 00:32:48.285 Removing: 
/var/run/dpdk/spdk_pid2477026 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2480183 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2487300 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2487329 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2493409 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2495684 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2497986 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2499383 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2501810 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2503111 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2513044 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2513709 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2514372 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2517292 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2517745 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2518337 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2523056 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2523213 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2524886 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2525539 00:32:48.285 Removing: /var/run/dpdk/spdk_pid2525569 00:32:48.285 Clean 00:32:48.547 16:24:24 -- common/autotest_common.sh@1451 -- # return 0 00:32:48.547 16:24:24 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:32:48.547 16:24:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:48.547 16:24:24 -- common/autotest_common.sh@10 -- # set +x 00:32:48.547 16:24:24 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:32:48.547 16:24:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:48.547 16:24:24 -- common/autotest_common.sh@10 -- # set +x 00:32:48.547 16:24:24 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:32:48.547 16:24:24 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:32:48.547 16:24:24 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:32:48.547 16:24:24 -- spdk/autotest.sh@391 -- # hash lcov 00:32:48.547 16:24:24 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:32:48.547 16:24:24 -- spdk/autotest.sh@393 -- # hostname 00:32:48.547 16:24:24 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:32:48.807 geninfo: WARNING: invalid characters removed from testname! 
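
Before the coverage aggregation commands that follow, the same lcov sequence is easier to read in condensed form. This sketch only abbreviates what the trace below does: LCOV_OPTS stands in for the long --rc/--no-external option block the harness passes on every call, OUT for the jenkins output directory, and the loop is an illustrative rewrite of the five individual lcov -r invocations (the trace additionally removes OLD_STDOUT/OLD_STDERR in its final rm).

    # Condensed form of the lcov post-processing performed below.
    OUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"

    # Merge the baseline and post-test captures into one tracefile...
    lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

    # ...then strip bundled DPDK, system headers and example/app code so the
    # report only covers SPDK library sources.
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $LCOV_OPTS -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"
    done
    rm -f "$OUT/cov_base.info" "$OUT/cov_test.info"
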
00:33:15.403 16:24:48 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:15.971 16:24:51 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:17.351 16:24:53 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:19.259 16:24:54 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:20.643 16:24:56 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:22.555 16:24:57 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:23.982 16:24:59 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:23.982 16:24:59 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:23.982 16:24:59 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:33:23.982 16:24:59 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:23.982 16:24:59 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:23.982 16:24:59 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.982 16:24:59 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.982 16:24:59 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.982 16:24:59 -- paths/export.sh@5 -- $ export PATH 00:33:23.983 16:24:59 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.983 16:24:59 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:33:23.983 16:24:59 -- common/autobuild_common.sh@444 -- $ date +%s 00:33:23.983 16:24:59 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721053499.XXXXXX 00:33:23.983 16:24:59 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721053499.R0p233 00:33:23.983 16:24:59 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:33:23.983 16:24:59 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:33:23.983 16:24:59 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:33:23.983 16:24:59 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:33:23.983 16:24:59 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:33:23.983 16:24:59 -- common/autobuild_common.sh@460 -- $ get_config_params 00:33:23.983 16:24:59 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:33:23.983 16:24:59 -- common/autotest_common.sh@10 -- $ set +x 00:33:23.983 16:24:59 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:33:23.983 16:24:59 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:33:23.983 16:24:59 -- pm/common@17 -- $ local monitor 00:33:23.983 16:24:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:23.983 16:24:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:23.983 16:24:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:23.983 16:24:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:23.983 16:24:59 -- pm/common@21 -- $ date +%s 00:33:23.983 16:24:59 -- pm/common@21 -- $ date +%s 00:33:23.983 
16:24:59 -- pm/common@25 -- $ sleep 1 00:33:23.983 16:24:59 -- pm/common@21 -- $ date +%s 00:33:23.983 16:24:59 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721053499 00:33:23.983 16:24:59 -- pm/common@21 -- $ date +%s 00:33:23.983 16:24:59 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721053499 00:33:23.983 16:24:59 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721053499 00:33:23.983 16:24:59 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721053499 00:33:23.983 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721053499_collect-vmstat.pm.log 00:33:23.983 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721053499_collect-cpu-load.pm.log 00:33:23.983 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721053499_collect-cpu-temp.pm.log 00:33:23.983 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721053499_collect-bmc-pm.bmc.pm.log 00:33:24.926 16:25:00 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:33:24.926 16:25:00 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144 00:33:24.926 16:25:00 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:24.926 16:25:00 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:33:24.926 16:25:00 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:33:24.926 16:25:00 -- spdk/autopackage.sh@19 -- $ timing_finish 00:33:24.926 16:25:00 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:33:24.926 16:25:00 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:33:24.926 16:25:00 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:33:24.926 16:25:00 -- spdk/autopackage.sh@20 -- $ exit 0 00:33:24.926 16:25:00 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:33:24.926 16:25:00 -- pm/common@29 -- $ signal_monitor_resources TERM 00:33:24.926 16:25:00 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:33:24.926 16:25:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:24.926 16:25:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:33:24.926 16:25:00 -- pm/common@44 -- $ pid=2538518 00:33:24.926 16:25:00 -- pm/common@50 -- $ kill -TERM 2538518 00:33:24.926 16:25:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:24.926 16:25:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:33:24.926 16:25:00 -- pm/common@44 -- $ pid=2538519 00:33:24.926 16:25:00 -- pm/common@50 -- $ kill 
-TERM 2538519 00:33:24.926 16:25:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:24.926 16:25:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:33:24.926 16:25:00 -- pm/common@44 -- $ pid=2538521 00:33:24.926 16:25:00 -- pm/common@50 -- $ kill -TERM 2538521 00:33:24.927 16:25:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:24.927 16:25:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:33:24.927 16:25:00 -- pm/common@44 -- $ pid=2538545 00:33:24.927 16:25:00 -- pm/common@50 -- $ sudo -E kill -TERM 2538545 00:33:24.927 + [[ -n 1945984 ]] 00:33:24.927 + sudo kill 1945984 00:33:24.938 [Pipeline] } 00:33:24.960 [Pipeline] // stage 00:33:24.966 [Pipeline] } 00:33:24.990 [Pipeline] // timeout 00:33:24.996 [Pipeline] } 00:33:25.014 [Pipeline] // catchError 00:33:25.020 [Pipeline] } 00:33:25.039 [Pipeline] // wrap 00:33:25.046 [Pipeline] } 00:33:25.063 [Pipeline] // catchError 00:33:25.073 [Pipeline] stage 00:33:25.076 [Pipeline] { (Epilogue) 00:33:25.092 [Pipeline] catchError 00:33:25.094 [Pipeline] { 00:33:25.109 [Pipeline] echo 00:33:25.110 Cleanup processes 00:33:25.117 [Pipeline] sh 00:33:25.406 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:25.406 2538626 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:33:25.406 2539065 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:25.423 [Pipeline] sh 00:33:25.711 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:25.711 ++ grep -v 'sudo pgrep' 00:33:25.711 ++ awk '{print $1}' 00:33:25.711 + sudo kill -9 2538626 00:33:25.725 [Pipeline] sh 00:33:26.012 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:33:38.256 [Pipeline] sh 00:33:38.543 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:33:38.543 Artifacts sizes are good 00:33:38.558 [Pipeline] archiveArtifacts 00:33:38.565 Archiving artifacts 00:33:38.756 [Pipeline] sh 00:33:39.035 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:33:39.050 [Pipeline] cleanWs 00:33:39.060 [WS-CLEANUP] Deleting project workspace... 00:33:39.060 [WS-CLEANUP] Deferred wipeout is used... 00:33:39.067 [WS-CLEANUP] done 00:33:39.068 [Pipeline] } 00:33:39.087 [Pipeline] // catchError 00:33:39.098 [Pipeline] sh 00:33:39.381 + logger -p user.info -t JENKINS-CI 00:33:39.390 [Pipeline] } 00:33:39.406 [Pipeline] // stage 00:33:39.410 [Pipeline] } 00:33:39.426 [Pipeline] // node 00:33:39.430 [Pipeline] End of Pipeline 00:33:39.585 Finished: SUCCESS